Mitigation Procedures to Rank Experts through Information Retrieval Measures (1603.04953v1)
Abstract: In order to find experts, different approaches build rankings of people, assume that these rankings reflect levels of expertise, and use typical Information Retrieval (IR) measures to evaluate their effectiveness. However, we observe that expert rankings (i) tend to be only partially ordered, (ii) are often incomplete, and (iii) consequently provide an order rather than absolute ranks, properties that usual IR measures do not exploit. To improve this state of the art, we propose to revise the formalism used in IR in order to design proper measures for comparing expert rankings. In this report, we take a first step by providing mitigation procedures for these three issues, and we analyse IR measures with the help of these procedures to identify useful revisions and remaining limitations. This analysis shows that, thanks to our mitigation procedures, most of the measures can be applied in this more general context. Moreover, measures based on precision and recall, which usually cannot account for the order of ranked items, become particularly relevant when a ranking is represented as a set of ordered pairs. Cumulative measures, on the other hand, are designed specifically to consider order but suffer from higher complexity, which motivates the use of precision/recall measures with the right representation.
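To make the ordered-pair idea concrete, here is a minimal sketch (not the authors' implementation) of how precision and recall can compare two rankings once each ranking is represented as a set of ordered pairs. The function names, the strict-order interpretation, and the example data are assumptions introduced purely for illustration.

```python
# Minimal sketch, assuming rankings are total orders given as Python lists
# and represented as sets of ordered pairs (a, b) meaning "a is ranked before b".
from itertools import combinations


def ordered_pairs(ranking):
    """Return the set of pairs (a, b) such that a appears before b in the ranking."""
    return {(a, b) for a, b in combinations(ranking, 2)}


def precision_recall(reference, candidate):
    """Precision/recall of a candidate ranking against a reference ranking,
    both compared as sets of ordered pairs."""
    ref_pairs = ordered_pairs(reference)
    cand_pairs = ordered_pairs(candidate)
    common = ref_pairs & cand_pairs
    precision = len(common) / len(cand_pairs) if cand_pairs else 1.0
    recall = len(common) / len(ref_pairs) if ref_pairs else 1.0
    return precision, recall


# Example: an incomplete candidate ranking (missing "dave") that also swaps
# "alice" and "bob"; the pair representation captures both defects.
reference = ["alice", "bob", "carol", "dave"]
candidate = ["bob", "alice", "carol"]
print(precision_recall(reference, candidate))  # (0.666..., 0.333...)
```

Because the comparison operates on sets of pairs, it naturally tolerates incomplete rankings (missing experts simply contribute fewer pairs) and rewards agreement on relative order rather than on absolute rank positions.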