
Uncertainty-Aware Explainable Recommendation with Large Language Models (2402.03366v1)

Published 31 Jan 2024 in cs.IR, cs.AI, cs.CL, and cs.LG

Abstract: Providing explanations within a recommendation system boosts user satisfaction and fosters trust, especially by elaborating on why the recommended items were selected for a particular user. The predominant approach in this domain generates text-based explanations, with a notable emphasis on applying LLMs. However, fine-tuning LLMs for explainable recommendation is impractical due to time constraints and computing resource limitations. As an alternative, the current approach trains the prompt rather than the LLM. In this study, we developed a model that uses the ID vectors of user and item inputs as prompts for GPT-2. We employed a joint training mechanism within a multi-task learning framework to optimize both the recommendation task and the explanation task. This strategy enables a more effective exploration of users' interests, improving recommendation effectiveness and user satisfaction. In experiments, our method achieves 1.59 DIV, 0.57 USR, and 0.41 FCR on the Yelp, TripAdvisor, and Amazon datasets, respectively, demonstrating superior performance over four SOTA methods on explainability evaluation metrics. In addition, we found that the proposed model maintains stable textual quality across the three public datasets.
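The abstract's "uncertainty-aware" joint training of the recommendation and explanation tasks suggests an uncertainty-weighted multi-task objective in the style of Kendall et al. (2018). A minimal sketch, assuming two hypothetical scalar task losses and learnable log-variance parameters `s_i` (the exact formulation used in the paper may differ):

```python
import math

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task losses L_i with learned log-variances s_i:
        total = sum_i exp(-s_i) * L_i + s_i
    A task assigned high uncertainty (large s_i) is down-weighted,
    while the additive s_i term keeps s_i from growing without bound."""
    assert len(task_losses) == len(log_vars)
    return sum(math.exp(-s) * L + s for L, s in zip(task_losses, log_vars))

# Example: a recommendation (rating) loss and an explanation (text) loss,
# with both log-variances initialized to zero (hypothetical values).
rec_loss, expl_loss = 0.8, 2.4
total = uncertainty_weighted_loss([rec_loss, expl_loss], [0.0, 0.0])
# With s_i = 0, the combination reduces to a plain sum of the two losses.
```

In training, the `log_vars` would be optimized jointly with the prompt (ID-vector) parameters, letting the model balance the two tasks automatically instead of hand-tuning loss weights.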

Authors (10)
  1. Yicui Peng (1 paper)
  2. Hao Chen (1006 papers)
  3. Chingsheng Lin (1 paper)
  4. Guo Huang (1 paper)
  5. Jinrong Hu (23 papers)
  6. Hui Guo (49 papers)
  7. Bin Kong (15 papers)
  8. Shu Hu (63 papers)
  9. Xi Wu (100 papers)
  10. Xin Wang (1307 papers)
Citations (3)