How Researchers Could Obtain Quick and Cheap User Feedback on their Algorithms Without Having to Operate their Own Recommender System (2212.07177v1)

Published 14 Dec 2022 in cs.IR

Abstract: The majority of recommendation algorithms are evaluated on historic benchmark datasets. Such evaluation is quick and cheap to conduct, yet it excludes the viewpoint of the users who actually consume the recommendations. User feedback is seldom collected, since doing so requires access to an operational recommender system. Establishing and maintaining an operational recommender system imposes a burden in time and money that most researchers cannot shoulder. We aim to reduce this burden in order to promote widespread user-centric evaluations of recommendation algorithms, in particular for novice researchers in the field. We present work in progress on an evaluation tool that implements a novel paradigm enabling user-centric evaluations of recommendation algorithms without access to an operational recommender system. Finally, we sketch the experiments we plan to conduct with the help of the evaluation tool.
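To make the contrast concrete, the sketch below illustrates the "quick and cheap" offline setting the abstract refers to: a recommendation algorithm scored against a historic interaction log, with no users in the loop. It is not the paper's evaluation tool or paradigm; the toy data, the popularity baseline, and the HitRate@k metric are all hypothetical choices for illustration.

```python
from collections import defaultdict

# Toy interaction log standing in for a historic benchmark dataset (hypothetical data).
interactions = [
    ("u1", "i1"), ("u1", "i2"), ("u1", "i3"),
    ("u2", "i2"), ("u2", "i3"), ("u2", "i4"),
    ("u3", "i1"), ("u3", "i4"), ("u3", "i5"),
]

# Leave-one-out split: each user's last logged item is held out as the test item.
per_user = defaultdict(list)
for user, item in interactions:
    per_user[user].append(item)
train = {u: items[:-1] for u, items in per_user.items()}
test = {u: items[-1] for u, items in per_user.items()}

def recommend(user, k=2):
    """Popularity baseline: recommend the k most frequent items the user has not seen."""
    counts = defaultdict(int)
    for items in train.values():
        for item in items:
            counts[item] += 1
    seen = set(train[user])
    ranked = sorted(counts, key=counts.get, reverse=True)
    return [i for i in ranked if i not in seen][:k]

# HitRate@k: fraction of users whose held-out item appears in their top-k list.
hits = sum(test[u] in recommend(u) for u in test)
print(f"HitRate@2 = {hits / len(test):.2f}")
```

The entire loop runs against logged data, which is exactly why it is fast and inexpensive, and also why it cannot capture how real users would judge the recommendations; the paper's proposed tool targets that missing user-centric side without requiring an operational recommender system.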

Authors (2)
  1. Tobias Eichinger (7 papers)
  2. Ananta Lamichhane (1 paper)
