Prompting Large Language Models for Recommender Systems: A Comprehensive Framework and Empirical Analysis (2401.04997v1)

Published 10 Jan 2024 in cs.IR

Abstract: Recently, LLMs such as ChatGPT have showcased remarkable abilities in solving general tasks, demonstrating the potential for applications in recommender systems. To assess how effectively LLMs can be used in recommendation tasks, our study primarily focuses on employing LLMs as recommender systems through prompt engineering. We propose a general framework for utilizing LLMs in recommendation tasks, focusing on the capabilities of LLMs as recommenders. To conduct our analysis, we formalize the input of LLMs for recommendation into natural language prompts with two key aspects, and explain how our framework can be generalized to various recommendation scenarios. As for the use of LLMs as recommenders, we analyze the impact of public availability, tuning strategies, model architecture, parameter scale, and context length on recommendation results based on the classification of LLMs. As for prompt engineering, we further analyze the impact of four important components of prompts, i.e., task descriptions, user interest modeling, candidate item construction, and prompting strategies. In each section, we first define and categorize concepts in line with the existing literature. Then, we propose inspiring research questions followed by experiments to systematically analyze the impact of different factors on two public datasets. Finally, we summarize promising directions to shed light on future research.

Understanding the Application of LLMs in Recommender Systems

Introduction to LLMs as Recommenders

Recommender systems have become essential in helping users navigate through an overwhelming amount of content to find what truly interests them. These systems analyze user behavior and preferences to suggest items users might like. With the advent of LLMs, such as ChatGPT, a new potential has emerged for creating more sophisticated recommender systems. LLMs are equipped with a vast amount of world knowledge and language abilities, making it possible to understand and use text in ways that traditional models cannot. This overview examines how LLMs can be utilized as recommenders, discussing factors that influence their effectiveness.

Framework for LLMs in Recommender Systems

LLMs can be integrated into recommender systems in several ways: as stand-alone recommendation models that decide which items to recommend, as tools that extract semantic understanding from text to enhance traditional algorithms, or as simulators for generative agents in recommendation environments. The focus here is on their use as stand-alone recommendation models.
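To make the stand-alone setting concrete, the sketch below shows one way a prompt-and-rank call could look. It is a minimal illustration rather than the paper's implementation: `call_llm`, `recommend`, and the prompt wording are placeholders for whatever chat-completion client and template a practitioner actually uses.

```python
# Minimal sketch of an LLM acting as a stand-alone recommender.
# `call_llm` stands in for any chat-completion client; it is not from the paper.

def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion API call (closed-source service or
    locally hosted open-source model). Returns the model's text response."""
    raise NotImplementedError

def recommend(user_history: list[str], candidates: list[str], k: int = 5) -> str:
    """Ask the LLM to rank candidate items for a user, given their history."""
    prompt = (
        "You are a recommender system.\n"
        f"The user recently interacted with: {', '.join(user_history)}.\n"
        f"Candidate items: {', '.join(candidates)}.\n"
        f"Rank the top {k} candidates the user is most likely to enjoy, "
        "most likely first, and return only the item titles."
    )
    return call_llm(prompt)
```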

Two factors are crucial when prompting LLMs as recommenders: selecting the right LLM as the foundation model and constructing the prompts themselves. Open-source and closed-source LLMs each have their pros and cons: closed-source models demonstrate stronger zero-shot performance, while open-source models offer more flexibility because they can be fine-tuned with domain-specific data. Model architecture, parameter scale, and context length are additional attributes that influence an LLM's ability to make recommendations.

Prompt Engineering for LLM-Based Recommender Systems

Effective prompting lies in crafting prompts that are clear and that tailor the innate abilities of an LLM to the task of making recommendations. This requires describing the task carefully, representing user interests appropriately, considering the nature and structure of candidate items, and applying prompting strategies such as zero-shot and few-shot prompting, or specialized techniques like recency-focused or role-playing prompts. Each component of the prompt plays an essential role in guiding the LLM towards generating useful recommendations.
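As a rough illustration of how these four components might be assembled, the sketch below builds a single prompt string from a task description, a user-interest representation, a candidate list, and a chosen strategy. The function name, arguments, and wording are assumptions for exposition, not the paper's exact templates.

```python
# Illustrative assembly of the four prompt components discussed above:
# (1) task description, (2) user interest modeling, (3) candidate item
# construction, and (4) a prompting strategy.

def build_prompt(task: str,
                 history: list[str],
                 candidates: list[str],
                 strategy: str = "zero-shot",
                 profile_summary: str | None = None) -> str:
    parts = [task]                                       # 1. task description
    if profile_summary:                                  # 2. user interest modeling
        parts.append(f"User profile summary: {profile_summary}")
    parts.append(f"Recently interacted items (newest last): {', '.join(history)}")
    parts.append(f"Candidate items: {', '.join(candidates)}")  # 3. candidate items
    if strategy == "recency-focused":                    # 4. prompting strategy
        parts.append("Pay special attention to the most recently interacted items.")
    elif strategy == "role-playing":
        parts.append("Act as an enthusiast who knows this user's taste well.")
    return "\n".join(parts)

prompt = build_prompt(
    task="Rank the candidate movies for this user, best first.",
    history=["Inception", "Interstellar", "Dune"],
    candidates=["Tenet", "Arrival", "Titanic"],
    strategy="recency-focused",
)
```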

Empirical Analysis and Insights

Experiments on two public datasets revealed several insightful findings. Closed-source LLMs, especially the latest such as GPT-4, display a robust ability for cold-start recommendations and can surpass certain traditional models. Open-source LLMs can also be fine-tuned to improve performance, though at the cost of computational efficiency.

Regarding prompt engineering, recent item interactions carry significant weight, and prompts that emphasize recency tend to produce better results. LLMs still need explicit instructions to grasp user preferences; hence, adding a generated summary of the user's profile to the prompt can help refine results. Interestingly, re-ranking candidates from traditional models with LLMs does not always yield improvements, suggesting that the LLM's general knowledge can be harnessed more effectively in some contexts than in others.
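The profile-summary finding implies a simple two-step pattern: first ask the LLM to summarize the user's preferences, then feed that summary into a recency-emphasizing recommendation prompt. The sketch below shows one possible shape of that flow; `call_llm` is the same placeholder client as in the earlier sketch, and nothing here is the paper's exact procedure.

```python
# Two-step pattern: summarize the user's profile, then recommend with a
# recency-emphasizing prompt. Prompt wording is illustrative only.

def recommend_with_profile(history: list[str], candidates: list[str]) -> str:
    summary = call_llm(
        "Summarize this user's preferences in one sentence, based on the items "
        f"they interacted with (newest last): {', '.join(history)}"
    )
    return call_llm(
        "You are a recommender system.\n"
        f"User preference summary: {summary}\n"
        f"Candidate items: {', '.join(candidates)}\n"
        "Rank the candidates, best first, emphasizing the most recent interests."
    )
```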

Future Directions

The journey of using LLMs in recommender systems is just beginning. Research should move towards optimizing their efficiency for real-world applications, developing methods for knowledge distillation that retain the LLM's abilities in more nimble models, and expanding into multimodal recommendations. Moreover, fairness considerations and privacy issues are paramount, ensuring that LLMs are employed ethically and responsibly in an increasingly personalized digital experience.

In conclusion, LLMs have opened up a new horizon for recommender systems. While challenges remain, their advanced understanding and generation of natural language promise to revolutionize how systems understand and cater to individual user preferences.

Authors (7)
  1. Lanling Xu (5 papers)
  2. Junjie Zhang (79 papers)
  3. Bingqian Li (3 papers)
  4. Jinpeng Wang (48 papers)
  5. Mingchen Cai (9 papers)
  6. Wayne Xin Zhao (196 papers)
  7. Ji-Rong Wen (299 papers)
Citations (22)