
Confidence-aware Fine-tuning of Sequential Recommendation Systems via Conformal Prediction

Published 14 Feb 2024 in cs.IR (arXiv:2402.08976v3)

Abstract: In Sequential Recommendation Systems (SRecsys), traditional training approaches that rely on Cross-Entropy (CE) loss often prioritize accuracy but fail to align well with user satisfaction metrics. CE loss focuses on maximizing the confidence of the ground truth item, which is challenging to achieve universally across all users and sessions. It also overlooks the practical acceptability of ranking the ground truth item within the top-$K$ positions, a common metric in SRecsys. To address this limitation, we propose \textbf{CPFT}, a novel fine-tuning framework that integrates Conformal Prediction (CP)-based losses with CE loss to optimize accuracy alongside confidence that better aligns with widely used top-$K$ metrics. CPFT embeds CP principles into the training loop using differentiable proxy losses and computationally efficient calibration strategies, enabling the generation of high-confidence prediction sets. These sets focus on items with high relevance while maintaining robust coverage guarantees. Extensive experiments on five real-world datasets and four distinct sequential models demonstrate that CPFT improves precision metrics and confidence calibration. Our results highlight the importance of confidence-aware fine-tuning in delivering accurate, trustworthy recommendations that enhance user satisfaction.
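The core idea in the abstract — split-conformal calibration to obtain top-$K$-style prediction sets, plus a differentiable proxy loss combined with CE — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual CPFT losses: the function names (`calibrate_threshold`, `cp_size_proxy`, `combined_loss`), the sigmoid relaxation with temperature `tau`, and the weighting `lam` are assumptions made for the example; the paper's own proxy losses and calibration strategy may differ.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def calibrate_threshold(cal_logits, cal_labels, alpha=0.1):
    """Split-conformal calibration: quantile of nonconformity scores
    (here: 1 - predicted probability of the ground-truth item)."""
    p = softmax(cal_logits)
    scores = 1.0 - p[np.arange(len(cal_labels)), cal_labels]
    n = len(scores)
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, q_level, method="higher")

def prediction_set(logits, q_hat):
    """Items whose nonconformity score falls below the calibrated
    threshold; covers the true item with probability >= 1 - alpha."""
    return (1.0 - softmax(logits)) <= q_hat

def cp_size_proxy(logits, q_hat, tau=0.1):
    """Differentiable surrogate for set membership: a sigmoid
    relaxation of the hard threshold test above."""
    p = softmax(logits)
    return 1.0 / (1.0 + np.exp(-(q_hat - (1.0 - p)) / tau))

def combined_loss(logits, labels, q_hat, lam=0.5, tau=0.1):
    """CE loss plus a penalty on the (relaxed) prediction-set size,
    in the spirit of fine-tuning accuracy alongside confidence."""
    p = softmax(logits)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    size = cp_size_proxy(logits, q_hat, tau).sum(axis=-1).mean()
    return ce + lam * size
```

Penalizing the relaxed set size pushes the model toward confident, compact prediction sets while the calibrated threshold preserves the coverage guarantee on exchangeable data.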

Citations (3)
