
Generative News Recommendation (2403.03424v1)

Published 6 Mar 2024 in cs.IR

Abstract: Most existing news recommendation methods perform semantic matching between candidate news and a user representation built from historically clicked news. However, they overlook the high-level connections among different news articles, as well as the deeper relationships between these articles and users. Moreover, by design such methods can only deliver news articles as-is, whereas integrating several relevant articles into a coherent narrative would help users gain a quicker and more comprehensive understanding of events. In this paper, we propose a novel generative news recommendation paradigm with two steps: (1) leveraging the internal knowledge and reasoning capabilities of an LLM to perform high-level matching between candidate news and the user representation; and (2) generating a coherent, logically structured narrative from the associations between related news and user interests, thereby engaging users in further reading. Specifically, we propose GNR to implement this paradigm. First, we compose dual-level representations of news and users by using the LLM to generate theme-level representations and combining them with semantic-level representations. Next, to generate a coherent narrative, we explore the relations among news articles and filter the related news according to user preferences. Finally, we propose a novel training method named UIFT to train the LLM to fuse multiple news articles into a coherent narrative. Extensive experiments show that GNR improves recommendation accuracy and generates more personalized and factually consistent narratives.
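To make the two-step pipeline concrete, here is a minimal Python sketch of the dual-level matching idea described above: each news item carries a semantic-level vector (e.g. from a sentence encoder) and a theme-level vector (e.g. an embedding of an LLM-generated theme), and candidates are scored against mean-pooled user representations at both levels. All names here (NewsItem, dual_level_score, the alpha weight, mean pooling) are illustrative assumptions, not the authors' published implementation.

```python
# Hypothetical sketch of GNR-style dual-level matching (step 1 of the
# paradigm). Assumed, not taken from the paper: the NewsItem interface,
# mean-pooled user vectors, and the alpha blending weight.
from dataclasses import dataclass

import numpy as np


@dataclass
class NewsItem:
    title: str
    semantic_vec: np.ndarray  # e.g. from a sentence encoder
    theme_vec: np.ndarray     # e.g. embedding of an LLM-generated theme


def _cos(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity with a small epsilon for numerical safety."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))


def dual_level_score(user_hist: list[NewsItem],
                     candidate: NewsItem,
                     alpha: float = 0.5) -> float:
    """Blend semantic-level and theme-level similarity for one candidate.

    The user is represented at each level by mean-pooling the vectors of
    their clicked news; alpha weights the semantic channel.
    """
    sem_user = np.mean([n.semantic_vec for n in user_hist], axis=0)
    thm_user = np.mean([n.theme_vec for n in user_hist], axis=0)
    return (alpha * _cos(sem_user, candidate.semantic_vec)
            + (1 - alpha) * _cos(thm_user, candidate.theme_vec))


def rank_candidates(user_hist: list[NewsItem],
                    candidates: list[NewsItem],
                    top_k: int = 5) -> list[NewsItem]:
    """Return the top-k candidates by dual-level score. In GNR, the
    selected related news would then be fused into a single coherent
    narrative by the UIFT-trained LLM (step 2, not sketched here)."""
    scored = sorted(candidates,
                    key=lambda c: dual_level_score(user_hist, c),
                    reverse=True)
    return scored[:top_k]
```

The theme-level channel is what distinguishes this from plain semantic matching: two articles about different events can still align on a shared high-level theme, which is the kind of connection the paper argues semantic-only methods miss.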

Authors (7)
  1. Shen Gao (49 papers)
  2. Jiabao Fang (3 papers)
  3. Quan Tu (16 papers)
  4. Zhitao Yao (2 papers)
  5. Zhumin Chen (78 papers)
  6. Pengjie Ren (95 papers)
  7. Zhaochun Ren (117 papers)
Citations (5)