A Troubling Analysis of Reproducibility and Progress in Recommender Systems Research (1911.07698v3)

Published 18 Nov 2019 in cs.IR, cs.LG, and cs.NE

Abstract: The design of algorithms that generate personalized ranked item lists is a central topic of research in the field of recommender systems. In the past few years, in particular, approaches based on deep learning (neural) techniques have become dominant in the literature. For all of them, substantial progress over the state-of-the-art is claimed. However, indications exist of certain problems in today's research practice, e.g., with respect to the choice and optimization of the baselines used for comparison, raising questions about the published claims. In order to obtain a better understanding of the actual progress, we have tried to reproduce recent results in the area of neural recommendation approaches based on collaborative filtering. The worrying outcome of the analysis of these recent works, all of which were published at prestigious scientific conferences between 2015 and 2018, is that 11 out of the 12 reproducible neural approaches can be outperformed by conceptually simple methods, e.g., based on the nearest-neighbor heuristics. None of the computationally complex neural methods was actually consistently better than already existing learning-based techniques, e.g., using matrix factorization or linear models. In our analysis, we discuss common issues in today's research practice, which, despite the many papers that are published on the topic, have apparently led the field to a certain level of stagnation.

Authors (4)
  1. Maurizio Ferrari Dacrema (26 papers)
  2. Simone Boglio (2 papers)
  3. Paolo Cremonesi (31 papers)
  4. Dietmar Jannach (53 papers)
Citations (189)

Summary

Analysis of Reproducibility and Progress in Recommender Systems Research

Recommender systems are integral to modern digital applications, generating personalized ranked lists of items based on user preferences. Despite the substantial progress claimed for deep learning techniques in recent years, the paper "A Troubling Analysis of Reproducibility and Progress in Recommender Systems Research" presents a critical evaluation of the current state of research. The authors scrutinize recent neural approaches to collaborative filtering against simple, established baselines to assess how much actual progress the field has made.

Key Findings

  • Reproducibility Challenges: Of 26 relevant papers published between 2015 and 2018 at top-tier conferences (SIGIR, KDD, WWW, IJCAI, WSDM, ACM RecSys), only 12 could be reproduced with reasonable effort using the available code and data. This gap points to a significant problem with research transparency and accountability.
  • Methodological Stagnation: 11 of the 12 reproducible neural approaches did not outperform conceptually simple methods such as nearest-neighbor heuristics or basic linear models (a sketch of such a baseline follows this list). This contradicts the pervasive narrative of rapid advancement and suggests stagnation rather than progress.
  • Baseline Optimization: Baselines used for comparison are often poorly tuned, which inflates the apparent gains of newly proposed methods. This issue is not unique to recommender systems; it affects other fields that apply deep learning and raises questions about empirical rigor.
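
To make concrete what a "conceptually simple" baseline looks like, below is a minimal sketch of an item-based nearest-neighbor recommender in the spirit of the ItemKNN baselines the paper evaluates. The function name, the dense-matrix representation, and the shrinkage term are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def item_knn_scores(R, k=50, shrink=10.0):
    """Score items for every user with an item-based kNN baseline.

    R      : dense user-item matrix (num_users x num_items); 1.0 where the
             user interacted with the item, 0.0 otherwise (implicit feedback).
    k      : number of nearest neighbors kept per item.
    shrink : shrinkage term that damps similarities with little support.
    """
    # Cosine similarity between item columns, damped by the shrinkage term.
    norms = np.linalg.norm(R, axis=0)
    sim = (R.T @ R) / (np.outer(norms, norms) + shrink + 1e-9)
    np.fill_diagonal(sim, 0.0)

    # Keep only the k most similar neighbors of each item.
    if k < sim.shape[0]:
        drop = np.argpartition(sim, -k, axis=1)[:, :-k]
        np.put_along_axis(sim, drop, 0.0, axis=1)

    # A user's score for item i is the similarity-weighted sum over the
    # items that user has already interacted with.
    scores = R @ sim
    scores[R > 0] = -np.inf  # never re-recommend items already seen
    return scores
```

Ranking each user's unseen items by these scores and taking the top n yields the recommendation list such a baseline is judged on.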

Implications

From a practical standpoint, the paper questions whether the computational cost of complex neural models in recommender systems is justified. Despite deep learning's proven success in fields such as natural language processing and computer vision, the findings suggest that its current use in recommendation tasks does not yet deliver gains over well-tuned, much simpler methods.

Theoretically, the results prompt a reconsideration of what counts as significant progress in applied machine learning for recommender systems. As the analysis shows, simpler algorithms often achieve similar or better outcomes, which calls for clearer evaluation criteria and improved research practices.

Future Directions

  • Enhanced Reproducibility Practices: The research community should enforce stricter guidelines on sharing code and datasets to ensure reproducibility. Adopting established evaluation frameworks such as LibRec or MyMediaLite may help standardize comparisons and reduce methodological bias.
  • Refined Experimental Methodologies: Hyperparameters should be selected and optimized transparently, for the baselines as well as for the proposed models; a tuning sketch in this spirit follows the list. Routinely including simple but competitive baselines should become standard practice for validating findings.
  • Focus on Real-World Impact: Beyond statistical significance, it is essential to measure how algorithmic improvements affect user satisfaction and business metrics in live deployments. Bridging the gap between offline metrics and online performance can guide more impactful advancements.
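
The tuning sketch referenced above: a hedged illustration of giving a baseline the same hyperparameter search a proposed model would receive. The grid values, the Precision@10 criterion, and the item_knn_scores helper from the earlier sketch are assumptions for illustration, not the evaluation protocol of the paper.

```python
import itertools
import numpy as np

def tune_item_knn(R_train, R_valid,
                  ks=(25, 50, 100, 200), shrinks=(0.0, 10.0, 50.0)):
    """Grid-search the baseline's hyperparameters on held-out interactions.

    R_train, R_valid : user-item matrices from a per-user split; the model
    scores items from R_train and is judged by Precision@10 against R_valid.
    """
    best_params, best_prec = None, -1.0
    for k, shrink in itertools.product(ks, shrinks):
        scores = item_knn_scores(R_train, k=k, shrink=shrink)
        top10 = np.argsort(-scores, axis=1)[:, :10]        # top-10 list per user
        hits = np.take_along_axis(R_valid, top10, axis=1)  # 1.0 where a hit
        prec = hits.mean()                                 # mean Precision@10
        if prec > best_prec:
            best_params, best_prec = (k, shrink), prec
    return best_params, best_prec
```

The paper's point is that when a search like this is applied to the baselines as diligently as to the proposed neural model, most of the reported advantages disappear.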

In summary, while deep learning technologies hold promise for advancing recommender systems, this paper cautions against presuming breakthrough progress without rigorous validation. As the field evolves, emphasizing reproducibility, fair baseline comparisons, and real-world efficacy may forge paths to genuine innovation.