
CELA: Cost-Efficient Language Model Alignment for CTR Prediction (2405.10596v3)

Published 17 May 2024 in cs.IR

Abstract: Click-Through Rate (CTR) prediction holds a paramount position in recommender systems. The prevailing ID-based paradigm underperforms in cold-start scenarios due to the skewed distribution of feature frequency. Additionally, the utilization of a single modality fails to exploit the knowledge contained within textual features. Recent efforts have sought to mitigate these challenges by integrating Pre-trained Language Models (PLMs). They design hard prompts to structure raw features into text for each interaction and then apply PLMs for text processing. With external knowledge and reasoning capabilities, PLMs extract valuable information even in cases of sparse interactions. Nevertheless, compared to ID-based models, pure text modeling degrades the efficacy of collaborative filtering, as well as feature scalability and efficiency during both training and inference. To address these issues, we propose Cost-Efficient Language Model Alignment (CELA) for CTR prediction. CELA incorporates textual features and language models while preserving the collaborative filtering capabilities of ID-based models. This model-agnostic framework can be equipped with plug-and-play textual features, with item-level alignment enhancing the utilization of external information while maintaining training and inference efficiency. Through extensive offline experiments, CELA demonstrates superior performance compared to state-of-the-art methods. Furthermore, an online A/B test conducted on an industrial App recommender system showcases its practical effectiveness, solidifying the potential for real-world applications of CELA.


Summary

  • The paper introduces the CELA framework that integrates PLMs with ID-based models to address cold-start issues and expand feature scope in CTR prediction.
  • The methodology employs a three-phase process—Domain-Adaptive Pre-training, Recommendation-Oriented Modal Alignment, and Multi-Modal Feature Fusion—to optimize model alignment while reducing training overhead.
  • Empirical results demonstrate improved offline metrics (AUC, Logloss) and real-world outcomes (eCPM, Download-Through Rate), underscoring CELA’s scalability and efficiency.

CELA: A Cost-Efficient Solution for CTR Prediction Using Pre-trained Language Models

Introduction

Understanding and predicting user behavior is at the heart of modern recommender systems, and an essential part of this is Click-Through Rate (CTR) prediction. Traditionally, industry has leaned on ID-based models for this task. These models encode user and item features into sparse one-hot vectors and transform them into dense embeddings, which are then fed through sophisticated feature interaction layers to predict CTR. Despite their success, ID-based models struggle with two main issues: dependency on historical data and limited feature scope.
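
To make the ID-based paradigm concrete, the snippet below is a minimal PyTorch sketch (an illustration, not code from the paper): each sparse feature field gets its own embedding table, the resulting dense vectors are concatenated, and an MLP head outputs a click probability. Production models replace the plain MLP with richer interaction layers such as DeepFM or DCN.

```python
# Minimal sketch of an ID-based CTR model (illustrative only, not the paper's code).
import torch
import torch.nn as nn

class IDBasedCTRModel(nn.Module):
    def __init__(self, field_vocab_sizes, embed_dim=16, hidden_dims=(128, 64)):
        super().__init__()
        # One embedding table per sparse feature field (user ID, item ID, category, ...).
        self.embeddings = nn.ModuleList(
            [nn.Embedding(v, embed_dim) for v in field_vocab_sizes]
        )
        layers, in_dim = [], embed_dim * len(field_vocab_sizes)
        for h in hidden_dims:
            layers += [nn.Linear(in_dim, h), nn.ReLU()]
            in_dim = h
        layers.append(nn.Linear(in_dim, 1))
        self.mlp = nn.Sequential(*layers)

    def forward(self, feature_ids):
        # feature_ids: (batch, num_fields) integer tensor of sparse feature IDs.
        embs = [emb(feature_ids[:, i]) for i, emb in enumerate(self.embeddings)]
        x = torch.cat(embs, dim=-1)
        return torch.sigmoid(self.mlp(x)).squeeze(-1)  # predicted CTR in [0, 1]

# Example: three feature fields with vocabularies of 1000 users, 500 items, 20 categories.
model = IDBasedCTRModel([1000, 500, 20])
batch = torch.randint(0, 20, (4, 3))
print(model(batch).shape)  # torch.Size([4])
```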

To address these challenges, researchers have proposed integrating Pre-trained Language Models (PLMs) into recommender systems. The paper "CELA: Cost-Efficient Language Model Alignment for CTR Prediction" introduces an approach that blends the semantic knowledge of PLMs with the practical strengths of ID-based models.

The CELA Framework

CELA, or Cost-Efficient Language Model Alignment, is designed to combine the advantages of textual feature utilization with those of ID-based models. It leverages PLMs to mitigate cold-start issues and broaden feature scope while remaining scalable and efficient. The framework proceeds in three phases:

  1. Domain-Adaptive Pre-training (DAP):
    • Adapt the PLM to domain-specific texts from the dataset.
    • Use objectives such as Masked Language Modeling (MLM) and SimCSE-style contrastive learning so that the PLM understands the domain and produces effective, uniformly distributed embeddings.
    • Train an ID-based model separately to serve as a reference point.
  2. Recommendation-Oriented Modal Alignment (ROMA):
    • Align the textual representations of items from the PLM with item-side feature embeddings from the ID-based model using contrastive learning.
    • This alignment is performed at the item level rather than the interaction level, significantly reducing training overhead (a minimal sketch of this step follows the list).
  3. Multi-Modal Feature Fusion (MF²):
    • Integrate aligned text representations with non-textual features in a new ID-based model.
    • This stage preserves collaborative filtering effectiveness while enriching the model with semantic knowledge (a fusion sketch appears further below).
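
The snippet below sketches what item-level alignment can look like, assuming a standard symmetric InfoNCE (CLIP-style) objective; the paper's exact loss, projections, and hyperparameters may differ. Item text representations from the PLM and item-side embeddings from the pretrained ID-based model are projected into a shared space and matched per item, so the cost scales with the number of items rather than the number of interactions.

```python
# Illustrative sketch of item-level contrastive alignment (ROMA-style), not the paper's exact code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ItemAlignment(nn.Module):
    def __init__(self, text_dim=768, id_dim=16, shared_dim=128, temperature=0.07):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, shared_dim)  # projects PLM text embeddings
        self.id_proj = nn.Linear(id_dim, shared_dim)      # projects ID-model item embeddings
        self.temperature = temperature

    def forward(self, text_emb, id_emb):
        # text_emb: (num_items, text_dim) item text representations from the PLM
        # id_emb:   (num_items, id_dim)   item-side feature embeddings from the ID-based model
        t = F.normalize(self.text_proj(text_emb), dim=-1)
        v = F.normalize(self.id_proj(id_emb), dim=-1)
        logits = t @ v.t() / self.temperature              # (num_items, num_items) similarity matrix
        labels = torch.arange(t.size(0), device=t.device)
        # Matching item pairs are positives; other items in the batch act as negatives.
        return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2

# Aligning a batch of 32 items: one pass per item, not per user-item interaction,
# which is what keeps the alignment overhead low.
loss = ItemAlignment()(torch.randn(32, 768), torch.randn(32, 16))
```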

These stages are designed to be iteratively refined, making the system progressively more accurate and efficient.
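
The sketch below illustrates one way the fused model can consume the aligned text representations; the exact wiring is my assumption, not the paper's architecture. The per-item text vector is precomputed and cached offline, so at training and inference time it is simply one more feature field concatenated with the ID embeddings, and no PLM forward pass is required.

```python
# Illustrative sketch of multi-modal feature fusion (MF2-style); details are assumed, not from the paper.
import torch
import torch.nn as nn

class FusionCTRModel(nn.Module):
    def __init__(self, field_vocab_sizes, text_dim=128, embed_dim=16, hidden=64):
        super().__init__()
        self.embeddings = nn.ModuleList(
            [nn.Embedding(v, embed_dim) for v in field_vocab_sizes]
        )
        # Maps the cached, aligned item text vector into an extra "text field" embedding.
        self.text_adapter = nn.Linear(text_dim, embed_dim)
        in_dim = embed_dim * (len(field_vocab_sizes) + 1)   # +1 for the text field
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, feature_ids, item_text_emb):
        # feature_ids: (batch, num_fields) sparse IDs; item_text_emb: (batch, text_dim) cached vectors.
        fields = [emb(feature_ids[:, i]) for i, emb in enumerate(self.embeddings)]
        fields.append(self.text_adapter(item_text_emb))     # plug-and-play textual feature
        return torch.sigmoid(self.mlp(torch.cat(fields, dim=-1))).squeeze(-1)

# Usage: ID features plus a cached 128-d aligned text vector per item in the batch.
model = FusionCTRModel([1000, 500, 20])
ctr = model(torch.randint(0, 20, (4, 3)), torch.randn(4, 128))
```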

Key Contributions and Findings

The proposed approach delivers several notable contributions:

  • Efficient Integration: Integrates textual features with ID-based models in a model-agnostic way, requiring minimal changes to existing architectures.
  • Item-Level Alignment: Reduces training overhead by aligning item text representations at the item level rather than the interaction level, maintaining low latency during inference.
  • Empirical Success: Comprehensive experiments on public and industrial datasets, including an A/B test in a real-world app store scenario, demonstrate CELA's effectiveness. It achieved notable improvements in both offline metrics (AUC, Logloss) and real-world metrics (eCPM, Download-Through Rate).

Practical Implications

The practical implications of this research are substantial:

  • Scalability: Efficient training and inference make CELA suitable for real-world applications where scalability is critical.
  • Cold-Start Problem: By leveraging PLMs, CELA effectively addresses the cold-start problem, making it robust in scenarios with sparse interactions.
  • Enhanced Accuracy: The integration of textual features captured by PLMs enriches the model's understanding and prediction capabilities, leading to higher accuracy.

Future Directions

The research opens several intriguing avenues for future work:

  1. Exploration of Larger PLMs: Investigate the use of even larger and more sophisticated PLMs, while finding ways to manage the increased complexity and overhead.
  2. Cross-Domain Applications: Adapt and apply CELA to different domains and types of recommender systems beyond CTR prediction.
  3. Real-Time Adaptation: Develop techniques for real-time learning and adaptation to dynamically changing datasets, ensuring the model remains up-to-date and accurate.

Conclusion

The CELA framework represents a significant step forward in CTR prediction, combining the strengths of traditional ID-based models with the rich semantic capabilities of PLMs. It is efficient, scalable, and effective, making it a promising solution for modern recommender systems.
