
Caveats for the journal and field normalizations in the CWTS ("Leiden") evaluations of research performance

Published 14 Feb 2010 in cs.DL and physics.soc-ph (arXiv:1002.2769v1)

Abstract: The Center for Science and Technology Studies at Leiden University advocates the use of specific normalizations for assessing research performance with reference to a world average. The Journal Citation Score (JCS) and Field Citation Score (FCS) are averaged for the research group or individual researcher under study, and then these values are used as denominators of the (mean) Citations per publication (CPP). Thus, this normalization is based on dividing two averages. This procedure only generates a legitimate indicator in the case of underlying normal distributions. Given the skewed distributions under study, one should average the observed versus expected values which are to be divided first for each publication. We show the effects of the Leiden normalization for a recent evaluation where we happened to have access to the underlying data.

Citations (189)

Summary

Evaluation of the CWTS ("Leiden") Normalizations for Research Performance

The paper "Caveats for the journal and field normalizations in the CWTS ('Leiden') evaluations of research performance" presents a critical analysis of the current normalization methodologies employed by the Center for Science and Technology Studies (CWTS) at Leiden University. The research conducted by Tobias Opthof and Loet Leydesdorff raises significant concerns regarding the validity of the Journal and Field Citation Scores (JCS and FCS) as utilized in Leiden's 'crown indicator,' which assesses research performance with respect to a world average.

Methodological Concerns

The core issue identified in the paper is the procedure of dividing averaged citation scores rather than averaging individual citation ratios, which is mathematically defensible only when the underlying distributions are normal. Because citation distributions are markedly skewed, Opthof and Leydesdorff argue that normalization should be based on the ratio of observed to expected citation values computed for each publication before any averaging takes place. Dividing first and averaging afterwards gives each publication equal weight in a research group's performance score, whereas dividing the two averages allows a few highly cited papers to dominate the result.
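The difference between the two orders of operation can be illustrated with a small numerical sketch (the citation counts and field baselines below are invented for illustration, not taken from the paper's data):

```python
# Sketch: contrast "ratio of means" (the CWTS crown-indicator style)
# with "mean of ratios" (the per-publication normalization the authors
# advocate) on a deliberately skewed citation sample.
citations = [0, 1, 1, 2, 50]   # observed citations per paper (skewed)
expected  = [2, 2, 4, 4, 10]   # hypothetical field/journal baselines

n = len(citations)

# CWTS-style indicator: divide the two averages (CPP / FCSm).
ratio_of_means = (sum(citations) / n) / (sum(expected) / n)

# Alternative: divide observed by expected per paper, then average.
mean_of_ratios = sum(c / e for c, e in zip(citations, expected)) / n

print(ratio_of_means)  # 2.4545... -- dominated by the one 50-citation paper
print(mean_of_ratios)  # 1.25     -- each paper weighted equally
```

On this sample the ratio of means is nearly twice the mean of ratios, because the single highly cited paper inflates the numerator average; under the per-publication normalization it counts as just one of five observations.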

Moreover, the authors highlight problems with field normalization caused by overlapping and misclassified ISI subject categories. These categories were devised for information retrieval rather than for analytical purposes, which calls their reliability into question when research output is evaluated across fields.

Numerical Findings and Case Studies

The empirical comparison of the authors' proposed methodology with the CWTS normalization reveals significant deviations. In an illustrative example drawn from a recent evaluation, the authors show that their method yields different, and in some cases higher, performance scores than Leiden's approach, producing different rankings and suggesting that the crown indicator can misrepresent actual research impact. Because the CWTS method informs managerial and policy decisions, such distortions can substantially undervalue lower-ranked scientists, with ethical and professional implications for resource allocation based on these metrics.

Implications and Future Directions

The findings presented in this paper carry profound implications for the application of bibliometric evaluations in research management. Specifically, they urge caution against using the 'Leiden' indicators for critical decisions without transparency and understanding of the underlying data and methodology. The authors advocate for alternative, more robust approaches, such as z-score normalization or non-parametric statistical methods, to accommodate the inherent skewness in citation data.
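One of the alternatives mentioned, z-score normalization, can be sketched as follows (the citation counts are illustrative, and standardizing raw counts this way is only a minimal example of the idea, not the authors' specific procedure):

```python
import statistics

# Minimal sketch of z-score normalization within a field:
# express each paper's citation count as standard deviations
# from the field mean, rather than as a ratio of averages.
field_citations = [0, 1, 1, 2, 50]   # hypothetical, skewed sample

mu = statistics.mean(field_citations)      # field mean (10.8 here)
sigma = statistics.pstdev(field_citations) # population std. deviation

z_scores = [(c - mu) / sigma for c in field_citations]
print(z_scores)
```

By construction the z-scores of a field average to zero, so a group's mean z-score directly indicates whether it sits above or below the field baseline; note, however, that z-scores still inherit the skewness of the raw counts, which is why the authors also point to non-parametric methods.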

Looking forward, the integration of discipline-oriented databases and hierarchical indexing schemes, such as MeSH terms or Chemical Abstracts classifications, could improve the accuracy and applicability of these evaluations in specialized research areas. The development of, and access to, more transparent and reproducible metrics is likewise essential for informed science policy and management.

In conclusion, while the CWTS normalization method has become a popular standard, its potential for misrepresentation suggests a pressing need for reform and methodological refinement. By adopting more scientifically rigorous normalization strategies, stakeholders can ensure that research evaluations reflect true scholarly contributions and support informed decision-making at all levels of academic governance.
