Large Language Models Make Sample-Efficient Recommender Systems (2406.02368v1)

Published 4 Jun 2024 in cs.IR and cs.CL

Abstract: LLMs have achieved remarkable progress in the field of NLP, demonstrating strong abilities in producing text that resembles human language for various tasks. This opens up new opportunities for employing them in recommender systems (RSs). In this paper, we specifically examine the sample efficiency of LLM-enhanced recommender systems, which pertains to the model's capacity to attain superior performance with a limited quantity of training data. Conventional recommendation models (CRMs) often need a large amount of training data because of the sparsity of features and interactions. Hence, we propose and verify our core viewpoint: LLMs Make Sample-Efficient Recommender Systems. We propose a simple yet effective framework (i.e., Laser) to validate the viewpoint from two aspects: (1) LLMs themselves are sample-efficient recommenders; and (2) LLMs, as feature generators and encoders, make CRMs more sample-efficient. Extensive experiments on two public datasets show that Laser requires only a small fraction of training samples to match or even surpass CRMs that are trained on the entire training set, demonstrating superior sample efficiency.
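
The second aspect above (LLMs as feature generators and encoders for CRMs) can be illustrated with a rough sketch: a frozen pretrained language model encodes textual user/item profiles into dense vectors, and a small downstream click-through-rate head is trained on top of them. The encoder name, pooling scheme, and MLP head below are illustrative assumptions, not the paper's actual Laser implementation.

```python
# Illustrative sketch (not the paper's code): a frozen LLM encodes user/item
# text into embeddings; a small trainable head consumes them as CRM features.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

encoder_name = "sentence-transformers/all-MiniLM-L6-v2"  # assumed placeholder encoder
tokenizer = AutoTokenizer.from_pretrained(encoder_name)
encoder = AutoModel.from_pretrained(encoder_name).eval()  # kept frozen

@torch.no_grad()
def encode(texts):
    """Mean-pool the encoder's last hidden states into one vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state           # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # (B, H)

class CTRHead(nn.Module):
    """Small MLP over concatenated user/item embeddings (stand-in for a CRM)."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, user_emb, item_emb):
        return self.mlp(torch.cat([user_emb, item_emb], dim=-1)).squeeze(-1)

# Hypothetical usage: train the head with binary cross-entropy on click labels.
user_emb = encode(["User who recently watched several sci-fi movies"])
item_emb = encode(["Item: a space-opera film with strong reviews"])
logit = CTRHead(user_emb.shape[-1])(user_emb, item_emb)
```

Because the LLM-derived embeddings already carry semantic information about users and items, the trainable head has far fewer parameters to fit, which is one plausible reading of why such a setup can reach strong performance from a small fraction of the training samples.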

Authors (7)
  1. Jianghao Lin (47 papers)
  2. Xinyi Dai (32 papers)
  3. Rong Shan (11 papers)
  4. Bo Chen (309 papers)
  5. Ruiming Tang (171 papers)
  6. Yong Yu (219 papers)
  7. Weinan Zhang (322 papers)
Citations (2)