
Text-like Encoding of Collaborative Information in Large Language Models for Recommendation (2406.03210v1)

Published 5 Jun 2024 in cs.IR

Abstract: When adapting LLMs for Recommendation (LLMRec), it is crucial to integrate collaborative information. Existing methods achieve this by learning collaborative embeddings in LLMs' latent space from scratch or by mapping from external models. However, they fail to represent the information in a text-like format, which may not align optimally with LLMs. To bridge this gap, we introduce BinLLM, a novel LLMRec method that seamlessly integrates collaborative information through text-like encoding. BinLLM converts collaborative embeddings from external models into binary sequences -- a specific text format that LLMs can understand and operate on directly, facilitating the direct usage of collaborative information in text-like format by LLMs. Additionally, BinLLM provides options to compress the binary sequence using dot-decimal notation to avoid excessively long lengths. Extensive experiments validate that BinLLM introduces collaborative information in a manner better aligned with LLMs, resulting in enhanced performance. We release our code at https://github.com/zyang1580/BinLLM.
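To make the encoding concrete, below is a minimal sketch of the two steps the abstract describes: binarizing a collaborative embedding into a bit string, then compressing that string with dot-decimal notation (one decimal number per 8-bit group, IPv4-style). The sign-threshold binarization and the 32-dimensional embedding here are illustrative assumptions; BinLLM's actual binarization is learned, so consult the released code for the paper's exact procedure.

```python
import numpy as np

def binarize(embedding: np.ndarray) -> str:
    """Map a real-valued collaborative embedding to a bit string.
    Uses a simple sign threshold for illustration; BinLLM learns
    its binarization, so this stands in for that step."""
    bits = (embedding > 0).astype(int)
    return "".join(str(b) for b in bits)

def to_dot_decimal(bit_string: str) -> str:
    """Compress a binary sequence into dot-decimal notation:
    each 8-bit group becomes one decimal number, joined by dots,
    which shortens the token sequence fed to the LLM."""
    assert len(bit_string) % 8 == 0, "pad the embedding to a multiple of 8 bits"
    octets = [str(int(bit_string[i:i + 8], 2)) for i in range(0, len(bit_string), 8)]
    return ".".join(octets)

# Hypothetical 32-dim user embedding from an external collaborative model.
rng = np.random.default_rng(0)
user_emb = rng.standard_normal(32)

bits = binarize(user_emb)      # e.g. "01101000..."
print(to_dot_decimal(bits))    # e.g. "104.23.191.7"
```

The resulting dot-decimal string can then be spliced into the LLM prompt as a text-like token (e.g., "user representation: 104.23.191.7"), which is the sense in which the collaborative information becomes directly readable by the model.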

Authors (6)
  1. Yang Zhang (1129 papers)
  2. Keqin Bao (21 papers)
  3. Ming Yan (190 papers)
  4. Wenjie Wang (150 papers)
  5. Fuli Feng (143 papers)
  6. Xiangnan He (200 papers)
Citations (8)
