Towards Open-World Recommendation with Knowledge Augmentation from LLMs
The paper "Towards Open-World Recommendation with Knowledge Augmentation from LLMs" presents a novel framework, KAR (Knowledge-Augmented Recommendation), which addresses the limitations of traditional recommender systems (RSs) by integrating open-world knowledge from LLMs. This work proposes an innovative approach to bridge the gap between LLMs and RSs, offering a method for embedding rich, external knowledge into recommendation algorithms to improve prediction accuracy and generalizability.
Framework Overview
KAR is designed as a model-agnostic framework organized into three stages for efficiently incorporating external knowledge from LLMs:
- Knowledge Reasoning and Generation: This stage extracts relevant knowledge from LLMs via a technique called "factorization prompting," which decomposes user preferences into key scenario-specific factors and prompts the LLM for both reasoning knowledge about user preferences and factual knowledge about items. The decomposition mitigates the compositional gap in LLMs (the tendency to answer sub-problems correctly yet fail to compose them into a full answer), helping the model recall world knowledge that is actually aligned with the user and item data; a minimal prompt-construction sketch follows this list.
- Knowledge Adaptation: This stage transforms the text-based knowledge generated by the LLM into dense vectors compatible with recommendation models. The text is first encoded by a knowledge encoder, and the resulting representations are refined by a hybrid-expert adaptor, which reduces their dimensionality and maps them from the encoder's semantic space into the feature space expected by RSs, making the knowledge representations more reliable inputs for the downstream model (see the adaptor sketch after this list).
- Knowledge Utilization: Once adapted, the knowledge vectors are fed into existing recommendation models as additional input features. By combining reasoning and factual knowledge with conventional domain features, the framework lets RSs exploit both collaborative signals and broad world knowledge (the utilization step is shown alongside the adaptor in the second sketch below).
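To make the factorization-prompting step concrete, here is a minimal sketch of how the two prompt types might be assembled for a movie scenario. The factor list and template wording are illustrative assumptions, not the paper's exact prompts.

```python
# Minimal sketch of factorization prompting (illustrative templates, not the
# paper's exact wording). Preference factors are decomposed per scenario and
# injected into two prompts: one for user reasoning knowledge, one for item
# factual knowledge.

# Hypothetical factor list for a movie scenario.
MOVIE_FACTORS = ["genre", "director", "actors", "era", "mood"]

def build_preference_prompt(user_history: list[str], factors: list[str]) -> str:
    """Prompt asking the LLM to reason about user preferences factor by factor."""
    history = "; ".join(user_history)
    factor_str = ", ".join(factors)
    return (
        f"The user has watched the following movies: {history}.\n"
        f"Analyze the user's preferences with respect to each of these factors: "
        f"{factor_str}. Summarize what kinds of movies the user is likely to enjoy."
    )

def build_item_prompt(item_title: str, factors: list[str]) -> str:
    """Prompt asking the LLM for factual knowledge about an item, guided by the same factors."""
    factor_str = ", ".join(factors)
    return (
        f"Introduce the movie '{item_title}' and describe its attributes, "
        f"covering: {factor_str}."
    )

# Example usage: the returned strings would be sent to an LLM, and the text
# responses stored for the adaptation stage.
print(build_preference_prompt(["The Matrix", "Inception"], MOVIE_FACTORS))
print(build_item_prompt("Blade Runner", MOVIE_FACTORS))
```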
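The adaptation and utilization stages can be pictured as a small mixture-of-experts projection followed by feature concatenation in the backbone model. The sketch below assumes the LLM outputs have already been encoded into fixed-size vectors (e.g., by a sentence encoder); the expert count, dimensions, and gating design are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class HybridExpertAdaptor(nn.Module):
    """Sketch of a hybrid-expert adaptor: several expert MLPs project the
    high-dimensional LLM knowledge vector into the recommendation space,
    and a learned gate mixes their outputs (dimensions are illustrative)."""

    def __init__(self, in_dim: int = 768, out_dim: int = 32, num_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))
             for _ in range(num_experts)]
        )
        self.gate = nn.Linear(in_dim, num_experts)

    def forward(self, knowledge_vec: torch.Tensor) -> torch.Tensor:
        # knowledge_vec: (batch, in_dim) dense encoding of LLM-generated text
        weights = torch.softmax(self.gate(knowledge_vec), dim=-1)                   # (batch, E)
        expert_out = torch.stack([e(knowledge_vec) for e in self.experts], dim=1)   # (batch, E, out_dim)
        return (weights.unsqueeze(-1) * expert_out).sum(dim=1)                      # (batch, out_dim)


class KnowledgeAugmentedCTR(nn.Module):
    """Toy CTR backbone showing the utilization stage: adapted reasoning and
    factual vectors are concatenated with ordinary domain feature embeddings."""

    def __init__(self, feature_dim: int = 64, knowledge_dim: int = 32):
        super().__init__()
        self.user_adaptor = HybridExpertAdaptor(out_dim=knowledge_dim)
        self.item_adaptor = HybridExpertAdaptor(out_dim=knowledge_dim)
        self.mlp = nn.Sequential(
            nn.Linear(feature_dim + 2 * knowledge_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, domain_features, user_knowledge, item_knowledge):
        u = self.user_adaptor(user_knowledge)   # adapted reasoning knowledge
        i = self.item_adaptor(item_knowledge)   # adapted factual knowledge
        x = torch.cat([domain_features, u, i], dim=-1)
        return torch.sigmoid(self.mlp(x)).squeeze(-1)   # predicted click probability
```

Because the adaptor and backbone are trained jointly on ordinary recommendation data, this design keeps KAR model-agnostic: any CTR or re-ranking model that accepts extra dense features can consume the adapted knowledge vectors.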
Experimental Results
Empirical evaluations show that KAR consistently improves a range of backbone models on key recommendation tasks such as click-through rate (CTR) prediction and re-ranking. On public datasets including MovieLens-1M and Amazon-Books, it yields AUC gains on the order of 1-2% over state-of-the-art baselines. KAR has also been deployed in Huawei's news and music recommendation platforms, where online A/B tests showed improvements of 7% and 1.7%, respectively, supporting its practical viability.
Comparative Analysis and Advantages
KAR compares favorably with earlier knowledge-enhancement approaches that rely solely on knowledge graphs or smaller pre-trained language models (PLMs). Its dual extraction of reasoning and factual knowledge provides insights these methods lack, covering both static item-related information and dynamically inferred user preferences.
Implications and Future Directions
Because knowledge can be generated, adapted, and stored ahead of time, KAR meets the low-latency requirements of large-scale systems, where inference time must stay within tight budgets (a prestore-and-lookup sketch follows below). The paper encourages further exploration of richer interactions between LLMs and RSs, suggesting adaptive LLM-based RS architectures that update dynamically as new information becomes available. Despite the solid groundwork laid by KAR, future research could directly address privacy considerations and mitigate the hallucination issues inherent in LLM deployments.
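The latency argument boils down to an offline/online split: LLM inference and knowledge adaptation run offline, and the serving path only performs a lookup plus the usual model forward pass. The sketch below is a hypothetical illustration of that pattern; the store, function names, and callables are invented for the example.

```python
import numpy as np

# Hypothetical offline/online split illustrating why prestored knowledge meets
# latency budgets: no LLM call ever happens on the request path.

knowledge_store: dict[str, np.ndarray] = {}   # in production, e.g. a feature store or key-value cache

def offline_precompute(entity_id: str, llm_text: str, encode, adapt) -> None:
    """Run the expensive steps (LLM text -> dense encoding -> adapted vector) ahead of time."""
    knowledge_store[entity_id] = adapt(encode(llm_text))

def online_score(user_id: str, item_id: str, domain_features: np.ndarray, model) -> float:
    """At serving time, fetch precomputed vectors and score with the recommendation model."""
    u = knowledge_store[user_id]   # adapted reasoning knowledge for the user
    i = knowledge_store[item_id]   # adapted factual knowledge for the item
    return model(np.concatenate([domain_features, u, i]))
```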
This research contributes significantly to the evolving field of recommender systems by integrating the reasoning capabilities of LLMs, pushing the boundaries of what is feasible in personalized content delivery and user satisfaction.