
Pareto Front Approximation for Multi-Objective Session-Based Recommender Systems (2407.16828v3)

Published 23 Jul 2024 in cs.IR, cs.AI, and cs.LG

Abstract: This work introduces MultiTRON, an approach that adapts Pareto front approximation techniques to multi-objective session-based recommender systems using a transformer neural network. Our approach optimizes trade-offs between key metrics such as click-through and conversion rates by training on sampled preference vectors. A significant advantage is that after training, a single model can access the entire Pareto front, allowing it to be tailored to meet the specific requirements of different stakeholders by adjusting an additional input vector that weights the objectives. We validate the model's performance through extensive offline and online evaluation. For broader application and research, the source code is made available at https://github.com/otto-de/MultiTRON. The results confirm the model's ability to manage multiple recommendation objectives effectively, offering a flexible tool for diverse business needs.
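The core idea in the abstract — training a single model on sampled preference vectors so it can later trace the whole Pareto front — can be illustrated with a minimal sketch. This assumes a simple weighted-sum scalarization of two per-objective losses and Dirichlet sampling of the preference vector; the function names and the exact loss combination here are illustrative, not taken from the MultiTRON implementation (see the linked repository for the real code):

```python
import numpy as np

rng = np.random.default_rng(0)

def scalarized_loss(loss_click, loss_order, pref):
    # Weighted-sum scalarization of two objectives (click and order loss).
    # pref is a non-negative preference vector summing to one.
    return pref[0] * loss_click + pref[1] * loss_order

# During training, sample a fresh preference vector per batch and feed it
# to the model as an extra input; the Dirichlet distribution keeps the
# weights non-negative and summing to one.
pref = rng.dirichlet(alpha=[1.0, 1.0])
train_loss = scalarized_loss(loss_click=0.7, loss_order=1.2, pref=pref)

# At inference time, sweeping a fixed preference vector traces the
# trade-off curve with the single trained model -- no retraining needed.
for w in np.linspace(0.0, 1.0, 5):
    p = np.array([w, 1.0 - w])
    print(p, scalarized_loss(0.7, 1.2, p))
```

Conditioning one network on the sampled weights (rather than training one model per trade-off point) is what lets stakeholders pick an operating point on the front after deployment by adjusting the input vector.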


