Pivotal Role of Language Modeling in Recommender Systems: Enriching Task-specific and Task-agnostic Representation Learning (2212.03760v5)

Published 7 Dec 2022 in cs.IR and cs.CL

Abstract: Recent studies have proposed unified user modeling frameworks that leverage user behavior data from various applications. Many of them benefit from utilizing users' behavior sequences as plain texts, representing rich information in any domain or system without losing generality. Hence, a question arises: Can language modeling for user history corpus help improve recommender systems? While its versatile usability has been widely investigated in many domains, its applications to recommender systems still remain underexplored. We show that language modeling applied directly to task-specific user histories achieves excellent results on diverse recommendation tasks. Also, leveraging additional task-agnostic user histories delivers significant performance benefits. We further demonstrate that our approach can provide promising transfer learning capabilities for a broad spectrum of real-world recommender systems, even on unseen domains and services.

Authors (8)
  1. Kyuyong Shin (8 papers)
  2. Hanock Kwak (9 papers)
  3. Wonjae Kim (25 papers)
  4. Jisu Jeong (24 papers)
  5. Seungjae Jung (9 papers)
  6. Kyung-Min Kim (25 papers)
  7. Jung-Woo Ha (67 papers)
  8. Sang-Woo Lee (34 papers)
Citations (4)
