Explainable Recommendation with Personalized Review Retrieval and Aspect Learning (2306.12657v1)

Published 22 Jun 2023 in cs.SI

Abstract: Explainable recommendation is a technique that combines prediction and generation tasks to produce more persuasive results. Among these tasks, textual generation demands large amounts of data to achieve satisfactory accuracy. However, historical user reviews of items are often insufficient, making it challenging to ensure the precision of generated explanation text. To address this issue, we propose a novel model, ERRA (Explainable Recommendation by personalized Review retrieval and Aspect learning). With retrieval enhancement, ERRA can obtain additional information from the training sets. With this additional information, we can generate more accurate and informative explanations. Furthermore, to better capture users' preferences, we incorporate an aspect enhancement component into our model. By selecting the top-n aspects that users are most concerned about for different items, we can model user representation with more relevant details, making the explanation more persuasive. To verify the effectiveness of our model, extensive experiments on three datasets show that our model outperforms state-of-the-art baselines (for example, 3.4% improvement in prediction and 15.8% improvement in explanation for TripAdvisor).
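The two enhancements the abstract describes — retrieving additional reviews from the training set and selecting the top-n aspects a user cares about — can be sketched roughly as below. This is a minimal illustration, not the paper's actual implementation: the embeddings, aspect names, scores, and helper functions are all made-up assumptions.

```python
import numpy as np

def retrieve_reviews(query_emb, review_embs, k=2):
    # Rank training-set review embeddings by cosine similarity to a
    # user-item query embedding; return indices of the top-k reviews.
    sims = review_embs @ query_emb / (
        np.linalg.norm(review_embs, axis=1) * np.linalg.norm(query_emb)
    )
    return np.argsort(-sims)[:k]

def top_n_aspects(aspect_scores, aspects, n=2):
    # Pick the n aspects with the highest relevance scores for this
    # user-item pair (scores stand in for learned aspect weights).
    order = np.argsort(-aspect_scores)[:n]
    return [aspects[i] for i in order]

# Toy data: five random review embeddings; the query is a slightly
# perturbed copy of review 3, so review 3 should be retrieved first.
rng = np.random.default_rng(0)
review_embs = rng.normal(size=(5, 8))
query_emb = review_embs[3] + 0.01 * rng.normal(size=8)

idx = retrieve_reviews(query_emb, review_embs, k=2)
aspects = ["location", "service", "cleanliness", "price"]
chosen = top_n_aspects(np.array([0.1, 0.7, 0.05, 0.4]), aspects, n=2)
print(idx[0], chosen)
```

In the full model these pieces would feed a transformer decoder: the retrieved reviews supply extra evidence for explanation generation, and the selected aspects sharpen the user representation.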

Authors (7)
  1. Hao Cheng
  2. Shuo Wang
  3. Wensheng Lu
  4. Wei Zhang
  5. Mingyang Zhou
  6. Kezhong Lu
  7. Hao Liao
Citations (14)