
Unbiased evaluation of ranking metrics reveals consistent performance in science and technology citation data (2001.05414v1)

Published 15 Jan 2020 in cs.SI, cs.DL, cs.IR, and physics.data-an

Abstract: Despite the increasing use of citation-based metrics for research evaluation purposes, we do not know yet which metrics best deliver on their promise to gauge the significance of a scientific paper or a patent. We assess 17 network-based metrics by their ability to identify milestone papers and patents in three large citation datasets. We find that traditional information-retrieval evaluation metrics are strongly affected by the interplay between the age distribution of the milestone items and age biases of the evaluated metrics. Outcomes of these metrics are therefore not representative of the metrics' ranking ability. We argue in favor of a modified evaluation procedure that explicitly penalizes biased metrics and allows us to reveal metrics' performance patterns that are consistent across the datasets. PageRank and LeaderRank turn out to be the best-performing ranking metrics when their age bias is suppressed by a simple transformation of the scores that they produce, whereas other popular metrics, including citation count, HITS and Collective Influence, produce significantly worse ranking results.
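The abstract mentions suppressing a metric's age bias with "a simple transformation of the scores". A minimal sketch of that idea, assuming a z-score rescaling of each paper's PageRank score against papers of similar age (a hypothetical reading; the paper's exact transformation is not specified in the abstract):

```python
import numpy as np

def pagerank(adj, alpha=0.85, iters=100):
    """Power-iteration PageRank on a citation adjacency matrix.
    adj[i, j] = 1 means paper i cites paper j."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    # Dangling nodes (papers citing nothing) spread weight uniformly
    transition = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n)
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        scores = (1 - alpha) / n + alpha * scores @ transition
    return scores

def age_rescaled(scores, order, window=10):
    """Z-score each paper's score against the `window` papers closest
    to it in age; `order` lists node indices from oldest to newest.
    (Illustrative only; the paper's exact rescaling may differ.)"""
    by_age = scores[np.asarray(order)]
    rescaled = np.empty_like(scores)
    n = len(order)
    for i, node in enumerate(order):
        lo = max(0, min(i - window // 2, n - window))
        local = by_age[lo:lo + window]
        sigma = local.std()
        rescaled[node] = (scores[node] - local.mean()) / sigma if sigma > 0 else 0.0
    return rescaled
```

Rescaling against an age-matched reference set removes the systematic advantage that older papers gain from having had more time to accumulate citations, which is the bias the abstract argues distorts standard evaluation metrics.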

Authors (4)
  1. Shuqi Xu (11 papers)
  2. Manuel Sebastian Mariani (22 papers)
  3. Linyuan Lü (68 papers)
  4. Matúš Medo (10 papers)
Citations (29)
