Knowledge-Augmented Relation Learning for Complementary Recommendation with Large Language Models (2509.05564v1)
Abstract: Complementary recommendations play a crucial role in e-commerce by enhancing user experience through suggestions of compatible items. Accurate classification of complementary item relationships requires reliable labels, but their creation presents a dilemma. Behavior-based labels are widely used because they can be easily generated from interaction logs; however, they often contain significant noise and lack reliability. While function-based labels (FBLs) provide high-quality definitions of complementary relationships by carefully articulating them based on item functions, their reliance on costly manual annotation severely limits a model's ability to generalize to diverse items. To resolve this trade-off, we propose Knowledge-Augmented Relation Learning (KARL), a framework that strategically fuses active learning with LLMs. KARL efficiently expands a high-quality FBL dataset at low cost by selectively sampling the data points the classifier finds most difficult and using the LLM to extend their labels. Our experiments showed that in out-of-distribution (OOD) settings, i.e., an unexplored item feature space, KARL improved baseline accuracy by up to 37%. In contrast, in in-distribution (ID) settings, i.e., the learned item feature space, the improvement was less than 0.5%, and prolonged learning could even degrade accuracy. These contrasting results stem from the data diversity driven by KARL's knowledge expansion, suggesting the need for a dynamic sampling strategy that adjusts diversity based on the prediction context (ID or OOD).
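The abstract describes KARL's core loop: train a classifier on the FBL dataset, select the item pairs it finds most difficult, and have an LLM supply function-based labels for them. Below is a minimal sketch of that loop, assuming a scikit-learn-style classifier and uncertainty sampling as the difficulty criterion; the paper's actual sampling rule and LLM annotator may differ, and `llm_label` here is a hypothetical placeholder, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def llm_label(pair_features):
    """Hypothetical placeholder for the paper's LLM annotator.

    A real implementation would prompt an LLM with the two items'
    descriptions and parse a function-based complementarity label
    (1 = complementary, 0 = not).
    """
    return 0  # dummy label, for illustration only

def karl_loop(X_labeled, y_labeled, X_pool, rounds=5, batch_size=100):
    """Active-learning loop in the spirit of KARL (a sketch, not the
    authors' code): repeatedly train a classifier, pick the pool pairs
    it is least certain about, and label them with the LLM."""
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(X_labeled, y_labeled)
        # Uncertainty sampling: predicted probabilities closest to 0.5
        # mark the pairs the classifier finds hardest.
        proba = clf.predict_proba(X_pool)[:, 1]
        uncertainty = -np.abs(proba - 0.5)
        hard_idx = np.argsort(uncertainty)[-batch_size:]
        # Expand the function-based label set with LLM annotations.
        new_y = np.array([llm_label(X_pool[i]) for i in hard_idx])
        X_labeled = np.vstack([X_labeled, X_pool[hard_idx]])
        y_labeled = np.concatenate([y_labeled, new_y])
        X_pool = np.delete(X_pool, hard_idx, axis=0)
    return clf
```

Sampling only the hardest pairs is what keeps annotation cost low, but, as the OOD/ID results suggest, it also concentrates new labels in diverse, unexplored regions of the feature space, which helps OOD generalization while yielding little (and eventually negative) return in ID settings.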