
Multi-Gradient Descent for Multi-Objective Recommender Systems (2001.00846v3)

Published 9 Dec 2019 in cs.IR, cs.AI, cs.LG, and stat.ML

Abstract: Recommender systems need to mirror the complexity of the environment they are applied in. The more we know about what might benefit the user, the more objectives the recommender system has. There may also be multiple stakeholders (sellers, buyers, shareholders) as well as legal and ethical constraints. Simultaneously optimizing for a multitude of objectives, whether correlated or not and whether on the same scale or not, has so far proven difficult. We introduce a stochastic multi-gradient descent approach to recommender systems (MGDRec) to solve this problem. We show that it exceeds state-of-the-art methods on traditional objective mixtures, such as revenue and recall. Moreover, through gradient normalization we can combine fundamentally different objectives, with diverse scales, into a single coherent framework. We show that uncorrelated objectives, such as the proportion of quality products, can be improved alongside accuracy. Through the use of stochasticity, we avoid the pitfalls of computing full gradients and provide a clear setting for the method's applicability.
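To make the core idea concrete, the following is a minimal sketch (not the paper's actual MGDRec implementation) of two-objective multi-gradient descent with gradient normalization: each objective's gradient is normalized to unit length so that differently scaled objectives become comparable, and the update direction is the minimum-norm element of the convex hull of the gradients, which decreases both objectives whenever a common descent direction exists. The objectives, variable names, and step size here are illustrative assumptions.

```python
import numpy as np

def min_norm_coeff(g1, g2):
    """Closed-form minimizer of ||a*g1 + (1-a)*g2|| over a in [0, 1]
    (the two-objective case of the MGDA min-norm subproblem)."""
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:
        return 0.5  # gradients coincide; any convex combination works
    return float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))

def mgd_step(x, grads, lr=0.1):
    # Normalize each objective's gradient so diverse scales are combinable.
    normed = [g / (np.linalg.norm(g) + 1e-12) for g in grads]
    a = min_norm_coeff(normed[0], normed[1])
    d = a * normed[0] + (1 - a) * normed[1]  # common descent direction
    return x - lr * d

# Toy objectives: f1(x) = ||x - p||^2 and f2(x) = ||x - q||^2.
# Their Pareto set is the segment between p and q.
p = np.array([0.0, 0.0])
q = np.array([4.0, 0.0])
x = np.array([2.0, 3.0])
for _ in range(200):
    g1 = 2 * (x - p)  # gradient of f1
    g2 = 2 * (x - q)  # gradient of f2
    x = mgd_step(x, [g1, g2])
# x ends up on (or very near) the Pareto segment between p and q
```

Near a Pareto point the two normalized gradients point in nearly opposite directions, so the min-norm combination shrinks toward zero and the iterates stall on the Pareto set rather than sacrificing one objective for the other; the paper's stochastic variant replaces the full gradients with mini-batch estimates.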

Authors (5)
  1. Nikola Milojkovic (1 paper)
  2. Diego Antognini (27 papers)
  3. Giancarlo Bergamin (1 paper)
  4. Boi Faltings (76 papers)
  5. Claudiu Musat (38 papers)
Citations (40)
