Optimizing Encoder-Only Transformers for Session-Based Recommendation Systems (2410.11150v1)

Published 15 Oct 2024 in cs.IR and cs.AI

Abstract: Session-based recommendation is the task of predicting the next item a user will interact with, often without access to historical user data. In this work, we introduce Sequential Masked Modeling, a novel approach for encoder-only transformer architectures to tackle the challenges of single-session recommendation. Our method combines data augmentation through window sliding with a unique penultimate token masking strategy to capture sequential dependencies more effectively. By enhancing how transformers handle session data, Sequential Masked Modeling significantly improves next-item prediction performance. We evaluate our approach on three widely used datasets, Yoochoose 1/64, Diginetica, and Tmall, comparing it to state-of-the-art single-session, cross-session, and multi-relation approaches. The results demonstrate that our Transformer-SMM models consistently outperform all models that rely on the same amount of information, while even rivaling methods that have access to more extensive user history. This study highlights the potential of encoder-only transformers in session-based recommendation and opens the door for further improvements.
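
The abstract names two mechanisms, window-sliding augmentation and penultimate token masking, without implementation details. Below is a minimal sketch of one plausible reading of them; the window size, stride, mask placement, and the helper names (`sliding_windows`, `mask_penultimate`, `MASK_ID`) are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch, assuming details the abstract leaves out: window size,
# stride, mask placement, and all names here are illustrative, not the
# paper's actual implementation.

MASK_ID = 0  # hypothetical id reserved for the [MASK] token


def sliding_windows(session, size=4, stride=1):
    """Window-sliding augmentation: cut one session into overlapping
    fixed-length windows so a single session yields many training
    sequences. Size and stride are assumed values."""
    if len(session) <= size:
        return [list(session)]
    return [list(session[i:i + size])
            for i in range(0, len(session) - size + 1, stride)]


def mask_penultimate(seq, mask_id=MASK_ID):
    """One reading of 'penultimate token masking': replace the
    second-to-last item with [MASK] and train the encoder to recover
    it from the surrounding context. The paper's exact scheme may differ."""
    assert len(seq) >= 2, "need at least two items to mask the penultimate one"
    inputs = list(seq)
    target = inputs[-2]
    inputs[-2] = mask_id
    return inputs, target


# Example: one session of item ids becomes several (inputs, target) pairs
# that an encoder-only transformer could be trained on.
session = [12, 7, 42, 9, 3, 18]
for window in sliding_windows(session):
    inputs, target = mask_penultimate(window)
    print(inputs, "->", target)
```

Each masked window becomes one supervised fill-in example; plausibly, next-item prediction at inference then reduces to the same task with the mask placed after the last observed item, though the abstract does not specify this.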
