Verify FACE performance with larger-scale LLMs

Determine how FACE, a general framework for mapping collaborative filtering embeddings into pretrained large language model (LLM) tokens, performs when integrated with larger-scale LLMs, and assess whether such integration improves semantic alignment and recommendation accuracy relative to the small-scale models used in the current experiments.

Background

FACE is evaluated using relatively small-scale LLMs (e.g., LLaMA2-7B, MiniLM-L6, Qwen2-7B) due to computational constraints. While the authors hypothesize that larger models could further enhance text embedding quality and vocabulary representation, they have not validated this empirically within the paper.

Evaluating FACE with larger-scale LLMs would clarify how model scale influences semantic alignment and recommendation effectiveness, and would test the generality of the approach beyond the configurations examined in the paper.
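
A minimal, illustrative sketch of a first step toward such a study: loading LLM backbones of different scales (Hugging Face checkpoint IDs below are assumptions, not the paper's exact configurations) and comparing basic token-embedding statistics, since the token vocabulary is the space FACE maps collaborative filtering embeddings into. This is not the paper's evaluation protocol, only a starting point for a scale comparison.

```python
# Hypothetical sketch: compare token-embedding tables of LLM backbones at
# different scales as a rough proxy for how model scale may affect the
# vocabulary space that FACE targets. Model IDs are illustrative assumptions.
import torch
from transformers import AutoModel

BACKBONES = [
    "meta-llama/Llama-2-7b-hf",   # small-scale backbone of the kind used in the paper
    "meta-llama/Llama-2-13b-hf",  # larger backbone whose effect remains unverified (assumption)
]

def token_embedding_stats(model_id: str) -> dict:
    """Load a backbone and report basic statistics of its token-embedding table."""
    model = AutoModel.from_pretrained(model_id, torch_dtype=torch.float16)
    emb = model.get_input_embeddings().weight  # shape: (vocab_size, hidden_dim)
    return {
        "model": model_id,
        "vocab_size": emb.shape[0],
        "hidden_dim": emb.shape[1],
        "mean_token_norm": emb.float().norm(dim=-1).mean().item(),
    }

if __name__ == "__main__":
    for model_id in BACKBONES:
        print(token_embedding_stats(model_id))
```

A full verification would additionally retrain or re-align the FACE mapping against each backbone's token space and report recommendation metrics, which is exactly the open question stated above.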

References

Due to the computational resource constraints during experimentation, the current system adopts relatively small-scale LLMs (e.g., Llama2-7B, MiniLM-L6, Qwen2-7B); the framework's performance when integrated with larger-scale LLMs remains unverified.

FACE: A General Framework for Mapping Collaborative Filtering Embeddings into LLM Tokens (2510.15729 - Wang et al., 17 Oct 2025) in Appendix: Limitations