
Discriminating abilities of threshold-free evaluation metrics in link prediction (2205.04615v3)

Published 10 May 2022 in physics.data-an

Abstract: Link prediction is a paradigmatic and challenging problem in network science, which attempts to uncover missing links or predict future links based on the known topology. A fundamental but still unsolved issue is how to choose proper metrics to fairly evaluate prediction algorithms. The area under the receiver operating characteristic curve (AUC) and the balanced precision (BP) are the two most popular metrics in early studies, though their effectiveness has recently come under debate. At the same time, the area under the precision-recall curve (AUPR) has become increasingly popular, especially in biological studies. Based on a toy model with tunable noise and predictability, we propose a method to measure the discriminating ability of any given metric. We apply this method to the above three threshold-free metrics, showing that AUC and AUPR are remarkably more discriminating than BP, and AUC is slightly more discriminating than AUPR. The result suggests that it is better to use AUC and AUPR together when evaluating link prediction algorithms; at the same time, it warns us that evaluation based only on BP may be unreliable. This article provides a starting point towards a comprehensive picture of the effectiveness of evaluation metrics for link prediction and other classification problems.
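
For readers unfamiliar with the three metrics compared in the abstract, the following minimal sketch (not the authors' code) shows one common way they could be computed for a link-prediction task. Here `y_true` marks which candidate links are true missing links, `scores` are hypothetical similarity scores from a predictor, and BP is taken as the precision among the top-L ranked candidates with L equal to the number of true positives (the point where precision equals recall); these names and the toy data are illustrative assumptions, not taken from the paper.

```python
# Sketch of the three threshold-free metrics discussed in the abstract.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def balanced_precision(y_true, scores):
    """Precision among the top-L ranked candidates, where L is the number
    of true positives (assumed definition: the precision = recall point)."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    L = int(y_true.sum())
    top = np.argsort(-scores)[:L]  # indices of the L highest-scoring candidates
    return y_true[top].mean()

# Toy example: 8 candidate links, 3 of which are true missing links.
y_true = np.array([1, 0, 1, 0, 0, 1, 0, 0])
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2])

print("AUC :", roc_auc_score(y_true, scores))            # area under ROC curve
print("AUPR:", average_precision_score(y_true, scores))  # area under precision-recall curve
print("BP  :", balanced_precision(y_true, scores))       # balanced precision
```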
