Measuring "Why" in Recommender Systems: a Comprehensive Survey on the Evaluation of Explainable Recommendation (2202.06466v1)

Published 14 Feb 2022 in cs.IR and cs.AI

Abstract: Explainable recommendation has shown great advantages for improving recommendation persuasiveness, user satisfaction, and system transparency, among others. A fundamental problem of explainable recommendation is how to evaluate the explanations. In the past few years, various evaluation strategies have been proposed. However, they are scattered across different papers, and a systematic and detailed comparison between them is lacking. To bridge this gap, in this paper, we comprehensively review the previous work and provide different taxonomies for it according to the evaluation perspectives and evaluation methods. Beyond summarizing the previous work, we also analyze the (dis)advantages of existing evaluation methods and provide a series of guidelines on how to select them. The contents of this survey are based on more than 100 papers from top-tier conferences such as IJCAI, AAAI, TheWebConf, RecSys, UMAP, and IUI, and a complete summarization is presented at https://shimo.im/sheets/VKrpYTcwVH6KXgdy/MODOC/. With this survey, we aim to provide a clear and comprehensive review on the evaluation of explainable recommendation.

Authors (3)
  1. Xu Chen (415 papers)
  2. Yongfeng Zhang (163 papers)
  3. Ji-Rong Wen (299 papers)
Citations (28)
