Large-Scale User Modeling with Recurrent Neural Networks for Music Discovery on Multiple Time Scales (1708.06520v1)

Published 22 Aug 2017 in cs.IR

Abstract: The amount of content on online music streaming platforms is immense, and most users only access a tiny fraction of this content. Recommender systems are the application of choice to open up the collection to these users. Collaborative filtering has the disadvantage that it relies on explicit ratings, which are often unavailable, and generally disregards the temporal nature of music consumption. On the other hand, item co-occurrence algorithms, such as the recently introduced word2vec-based recommenders, are typically left without an effective user representation. In this paper, we present a new approach to model users through recurrent neural networks by sequentially processing consumed items, represented by any type of embeddings and other context features. This way we obtain semantically rich user representations, which capture a user's musical taste over time. Our experimental analysis on large-scale user data shows that our model can be used to predict future songs a user will likely listen to, both in the short and long term.
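
The core idea described in the abstract, reading a user's listening history as a sequence of pretrained song embeddings (plus context features) with an RNN to obtain a taste vector that predicts upcoming tracks, can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' exact architecture: the GRU cell, layer sizes, the cosine-similarity objective, and all tensor names here are assumptions.

```python
# Minimal sketch (assumed architecture, not the paper's exact model):
# a GRU reads a user's listening history as pretrained song embeddings
# and emits a user vector trained to point at the next consumed song.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RNNUserModel(nn.Module):
    def __init__(self, embed_dim: int, context_dim: int, hidden_dim: int):
        super().__init__()
        # Input at each step: a song embedding concatenated with context features.
        self.rnn = nn.GRU(embed_dim + context_dim, hidden_dim, batch_first=True)
        # Project the final hidden state back into the song-embedding space.
        self.project = nn.Linear(hidden_dim, embed_dim)

    def forward(self, song_embs, context_feats):
        # song_embs:     (batch, seq_len, embed_dim)   e.g. word2vec-style track vectors
        # context_feats: (batch, seq_len, context_dim) e.g. time of day, device
        x = torch.cat([song_embs, context_feats], dim=-1)
        _, h_n = self.rnn(x)               # h_n: (1, batch, hidden_dim)
        user_vec = self.project(h_n[-1])   # (batch, embed_dim) "taste" vector
        return F.normalize(user_vec, dim=-1)

# Hypothetical training step: pull the user vector toward the embedding of the
# song the user actually played next, using cosine similarity as the objective.
def training_step(model, song_embs, context_feats, next_song_emb, optimizer):
    user_vec = model(song_embs, context_feats)
    target = F.normalize(next_song_emb, dim=-1)
    loss = 1.0 - (user_vec * target).sum(dim=-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At serving time, a user vector produced this way could be matched against the catalog by nearest-neighbor search in the song-embedding space, which is one plausible way to turn the learned representation into short- and long-term song predictions as the abstract describes.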

Authors (8)
  1. Cedric De Boom (15 papers)
  2. Rohan Agrawal (6 papers)
  3. Samantha Hansen (3 papers)
  4. Esh Kumar (1 paper)
  5. Romain Yon (1 paper)
  6. Ching-Wei Chen (7 papers)
  7. Thomas Demeester (76 papers)
  8. Bart Dhoedt (47 papers)
Citations (18)
