Towards a new crown indicator: An empirical analysis (1004.1632v2)

Published 9 Apr 2010 in cs.DL and physics.soc-ph

Abstract: We present an empirical comparison between two normalization mechanisms for citation-based indicators of research performance. These mechanisms aim to normalize citation counts for the field and the year in which a publication was published. One mechanism is applied in the current so-called crown indicator of our institute. The other mechanism is applied in the new crown indicator that our institute is planning to adopt. We find that at high aggregation levels, such as at the level of large research institutions or at the level of countries, the differences between the two mechanisms are very small. At lower aggregation levels, such as at the level of research groups or at the level of journals, the differences between the two mechanisms are somewhat larger. We pay special attention to the way in which recent publications are handled. These publications typically have very low citation counts and should therefore be handled with special care.

Citations (221)

Summary

  • The paper empirically compares two citation normalization mechanisms, CPP/FCSm and MNCS, for evaluating research performance across different aggregation levels (groups, institutions, countries, journals).
  • Key findings show strong agreement between the two indicators at higher aggregation levels but more pronounced differences at lower levels, highlighting sensitivity to aggregation and recent publications.
  • The analysis supports the adoption of the MNCS for its theoretical properties but recommends considering the exclusion of recent publications to reduce noise and improve robustness, especially at finer resolutions.

Empirical Analysis on Citation-based Normalization Mechanisms

The paper "Towards a new crown indicator: An empirical analysis" by Ludo Waltman et al. provides a comprehensive examination of two normalization mechanisms used in evaluating research performance through citation-based indicators. These mechanisms constitute central components in the operationalization of the current and new crown indicators developed by the Centre for Science and Technology Studies (CWTS) at Leiden University.

The existing crown indicator, known as CPP/FCSm (citations per publication divided by the mean field citation score), has been widely used for assessing research performance by normalizing citation counts according to the field and year of publication; it first averages the actual and the expected citation counts separately and then takes the ratio of the two averages. The proposed new crown indicator, termed the MNCS (mean normalized citation score), instead averages the per-publication ratios of actual to expected citations, thereby giving every publication equal weight regardless of its field's anticipated citation propensity.
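The difference between the two mechanisms, a ratio of averages versus an average of ratios, can be sketched as follows. The citation and expected-citation values here are hypothetical, chosen only to show that the two indicators can diverge:

```python
# c[i] = actual citations of publication i; e[i] = expected citations for
# publications of the same field, year, and document type (hypothetical values).

def cpp_fcsm(c, e):
    """Current crown indicator: ratio of the averages, which reduces to
    the ratio of the sums."""
    return sum(c) / sum(e)

def mncs(c, e):
    """New crown indicator: average of the per-publication ratios."""
    return sum(ci / ei for ci, ei in zip(c, e)) / len(c)

# Two publications: one in a low-citation field, one in a high-citation field.
c = [4, 10]   # actual citations
e = [2, 10]   # expected citations given field and year

print(cpp_fcsm(c, e))  # 14/12 ≈ 1.167
print(mncs(c, e))      # (4/2 + 10/10) / 2 = 1.5
```

Because MNCS averages the ratios, the publication in the low-citation field (expected score 2) influences the result as strongly as the one in the high-citation field, which is exactly the equal-weighting property described above.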

Methodology and Data Sets

This paper empirically compares these two indicators across four aggregation levels: research groups, research institutions, countries, and journals. The analysis draws on extensive bibliometric data from the Web of Science, spanning different subject categories to capture variations in citation patterns across fields. The selected data sets cover various scientific disciplines and scales, from entire countries down to individual research groups. Notably, the MNCS indicator is differentiated into MNCS1 and MNCS2 variants, the latter excluding publications that are less than one year old at the time of analysis in order to minimize the citation noise contributed by recent publications.
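The MNCS2 variant described above can be sketched as a simple age filter before averaging. The dictionary field names and the one-year cutoff implementation below are illustrative assumptions, not taken from the paper:

```python
from datetime import date

def mncs2(pubs, analysis_date, min_age_days=365):
    """MNCS restricted to publications at least one year old (a sketch).

    Each publication is a dict with hypothetical keys:
      "published" - a datetime.date
      "citations" - actual citation count
      "expected"  - expected citations for its field and year
    Raises ZeroDivisionError if no publication passes the age filter.
    """
    eligible = [p for p in pubs
                if (analysis_date - p["published"]).days >= min_age_days]
    return sum(p["citations"] / p["expected"] for p in eligible) / len(eligible)
```

Filtering before averaging means a barely citable new paper never enters the mean at all, rather than entering it with an unstable ratio.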

Key Findings

The paper's findings lead to several observations:

  1. Aggregation Levels: At higher aggregation levels, such as countries and large research institutions, both the CPP/FCSm and the MNCS indicators show strong linear and monotonic correlations, suggesting minimal differences in their application outcomes. In contrast, at lower aggregation levels, such as individual research groups or journals, variations between the two indicators become more pronounced.
  2. Recent Publications: The treatment of recent publications reveals a potential drawback of the MNCS indicator, as recent works—despite low citation counts—are weighted equally with older publications, potentially introducing noise. The exclusion of recent publications (MNCS2) often brings closer alignment with CPP/FCSm scores, especially at finer aggregation resolutions.
  3. Field Variability: Divergences in field-specific citation behaviors necessitate rigorous normalization techniques to ensure fair performance assessment. The paper underscores the importance of adjusting for temporal and field-specific citation tendencies in bibliometrics.
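Finding 2 can be made concrete with a small numerical sketch. Because a very recent publication typically has a tiny expected citation score, even a single citation can produce an outsized ratio that dominates the mean at low aggregation levels. All numbers here are hypothetical:

```python
# (citations, expected citations) for four established publications of a
# small research group, each close to the field average:
older = [(12, 10.0), (8, 10.0), (11, 10.0), (9, 10.0)]

# One brand-new paper: a single citation against an expected score of 0.1
# gives a normalized ratio of 10.
recent = (1, 0.1)

def mean_of_ratios(pubs):
    """MNCS-style average of per-publication citation ratios."""
    return sum(c / e for c, e in pubs) / len(pubs)

print(mean_of_ratios(older))             # ≈ 1.0
print(mean_of_ratios(older + [recent]))  # ≈ 2.8
```

One new paper nearly triples the group's score, which illustrates why the MNCS2 variant excludes such publications at finer aggregation resolutions.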

Implications

The transition by CWTS to employ the MNCS indicator derives from its superior theoretical properties, particularly its consistency and equal weighting of publications. However, the empirical analysis advocates for careful consideration of the noise effect from newly published works, recommending the temporary exclusion of these publications in some contexts to enhance indicator robustness.

For institutions and policymakers, adopting a normalized citation score that equally weighs all publications offers a less biased reflection of research output across varying disciplinary landscapes. It also responds to community calls for bibliometric tools that better capture individual contributions without disproportionate bias towards historically high-citation fields.

Future Directions

The results highlight avenues for further work on refining bibliometric indicators. Future inquiry could focus on refining publication exclusion criteria, developing predictive models for long-term citation impact, or integrating document-type considerations. The overarching aim remains a balanced, equitable metric for research evaluation that can be applied consistently across varied scientific domains and institutional contexts.

In conclusion, this paper provides substantial empirical evidence to guide the evolution of citation-based performance metrics, emphasizing the nuanced differences between current practices and emerging methodologies. The insights drawn hold tangible potential to inform both theoretical developments and practical implementations within the domain of bibliometrics.