Evaluation scores and dataset bias in salient object detection benchmarking
Develop evaluation measures and benchmarking protocols for salient object detection that address and mitigate dataset bias (including center bias and annotation subjectivity), yielding reliable, comparable scores across models and datasets. Establish metrics that reflect segmentation quality and model behavior more faithfully than current PR-, ROC-, AUC-, and F-measure-based scores when the underlying datasets carry inherent biases.
References
Finally, we propose probable solutions for tackling several open problems such as evaluation scores and dataset bias, which also suggest future research directions in the rapidly-growing field of salient object detection.
— Salient Object Detection: A Benchmark
(arXiv:1501.02741, Borji et al., 2015), Abstract