
The Leiden Ranking 2011/2012: Data collection, indicators, and interpretation (1202.3941v1)

Published 17 Feb 2012 in cs.DL

Abstract: The Leiden Ranking 2011/2012 is a ranking of universities based on bibliometric indicators of publication output, citation impact, and scientific collaboration. The ranking includes 500 major universities from 41 different countries. This paper provides an extensive discussion of the Leiden Ranking 2011/2012. The ranking is compared with other global university rankings, in particular the Academic Ranking of World Universities (commonly known as the Shanghai Ranking) and the Times Higher Education World University Rankings. Also, a detailed description is offered of the data collection methodology of the Leiden Ranking 2011/2012 and of the indicators used in the ranking. Various innovations in the Leiden Ranking 2011/2012 are presented. These innovations include (1) an indicator based on counting a university's highly cited publications, (2) indicators based on fractional rather than full counting of collaborative publications, (3) the possibility of excluding non-English language publications, and (4) the use of stability intervals. Finally, some comments are made on the interpretation of the ranking, and a number of limitations of the ranking are pointed out.

Citations (414)

Summary

  • The paper introduces a robust bibliometric framework that focuses solely on scientometric data to evaluate university research performance.
  • It details innovative indicators, notably the PP(top 10%) metric, and a fractional counting scheme that reduces the biases inherent in full counting of collaborative publications.
  • The study contrasts its methodology with ARWU and THE rankings, highlighting the benefits of using objective Web of Science data.

Detailed Analysis of the Leiden Ranking 2011/2012

The paper "The Leiden Ranking 2011/2012: Data Collection, Indicators, and Interpretation" by Waltman et al. systematically examines the Leiden Ranking 2011/2012, a bibliometric-based evaluation of universities. The authors delve into the methodology of data collection, the indicators used, and the interpretation within this specific ranking framework, contrasting it with other prevalent global university rankings like the ARWU and THE Rankings.

Methodological Framework

The Leiden Ranking distinguishes itself by focusing solely on scientometric data to assess the research performance of universities, deliberately disregarding other performance dimensions such as educational quality. This addresses a key methodological concern with other rankings, which amalgamate disparate performance dimensions into a single composite metric. The Leiden Ranking relies on data drawn directly from the Web of Science database rather than on institution-supplied information, which can be prone to manipulation and to inconsistencies arising from the lack of standardized definitions.

Innovations in Leiden Ranking 2011/2012

This edition introduces methodological advances intended to make bibliometric assessment more accurate:

  1. PP Top 10% Indicator: A noteworthy addition is the PP top 10% indicator, which measures the proportion of a university's publications that belong to the top 10% most frequently cited publications of their field and publication year. Unlike mean-based citation indicators, it is not distorted by a handful of extreme outlier publications (see the sketch after this list).
  2. Fractional Counting Method: The shift from full to fractional counting of collaborative publications weights each publication by the university's share of it, improving the fairness and accuracy of comparisons and mitigating the advantage otherwise enjoyed by institutions with extensive co-authorship networks.
  3. Language Considerations: Non-English publications can be read and cited by only part of the international research community and therefore tend to attract fewer citations. Allowing them to be excluded avoids systematically disadvantaging universities that publish a substantial share of their work in languages other than English.
  4. Stability Intervals: These intervals convey how sensitive an indicator's value is to the particular set of publications on which it is based; wide intervals warn against overinterpreting small differences between universities, reducing reliance on volatile point estimates (a bootstrap illustration follows the first sketch below).
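
To make the first two innovations concrete, here is a minimal sketch in Python. It is an illustration under simplifying assumptions, not the ranking's actual implementation: each publication is assumed to carry its citation count, the citation threshold delimiting the top 10% of its field and publication year (in practice derived from the full Web of Science data), and the university's share of the publication. The `Publication` class and `pp_top10` function are hypothetical names introduced here for illustration.

```python
from dataclasses import dataclass

@dataclass
class Publication:
    citations: int        # citations received by this publication
    top10_threshold: int  # citations needed to reach the top 10% of its field and year
    author_share: float   # fraction of the publication attributed to the university

def pp_top10(pubs: list[Publication]) -> float:
    """Fractionally counted PP(top 10%): the share of a university's
    (fractionalized) publication output that belongs to the top 10%
    most cited publications of the same field and publication year."""
    total = sum(p.author_share for p in pubs)  # fractional publication count
    highly_cited = sum(
        p.author_share for p in pubs if p.citations >= p.top10_threshold
    )
    return highly_cited / total if total else 0.0

# Three papers; only the first is highly cited, and the university
# contributed half of that collaborative paper.
pubs = [
    Publication(citations=120, top10_threshold=50, author_share=0.5),
    Publication(citations=8,   top10_threshold=50, author_share=1.0),
    Publication(citations=3,   top10_threshold=40, author_share=1.0),
]
print(pp_top10(pubs))  # 0.5 / 2.5 = 0.2
```

Under full counting the same university would score 1/3; fractional counting credits it with only its 0.5 share of the collaborative paper, which is what prevents heavily collaborating institutions from being overweighted.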
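
Stability intervals can likewise be illustrated with a simple bootstrap, again a sketch of the general idea rather than the paper's exact procedure: resample a university's publications with replacement, recompute the indicator on each resample, and report a central interval of the resulting distribution. The sketch reuses `pubs` and `pp_top10` from above.

```python
import random

def stability_interval(pubs, indicator, n_samples=1000, coverage=0.95):
    """Bootstrap a stability interval: resample the publication set with
    replacement, recompute the indicator each time, and return the central
    `coverage` fraction of the sorted results."""
    values = sorted(
        indicator(random.choices(pubs, k=len(pubs)))
        for _ in range(n_samples)
    )
    lo = int(n_samples * (1 - coverage) / 2)
    hi = int(n_samples * (1 + coverage) / 2) - 1
    return values[lo], values[hi]

low, high = stability_interval(pubs, pp_top10)
print(f"PP(top 10%) stability interval: [{low:.2f}, {high:.2f}]")
```

A wide interval signals that the point estimate rests on few publications and that small differences between universities should not be overinterpreted.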

Comparative Analysis with Other Rankings

The paper critically contrasts the methodology of Leiden Ranking with ARWU and THE Rankings:

  • ARWU Criticism: The ARWU's aggregation of diverse performance criteria into a single metric has drawn criticism for its arbitrary weighting, for potential biases arising from its focus on Nobel laureates, and for its use of self-supplied institutional data, which can introduce inconsistencies. The Leiden Ranking avoids these pitfalls by maintaining a narrow focus on bibliometric indicators and by normalizing them across scientific fields.
  • THE Ranking Concerns: Like the ARWU, the THE Rankings employ a broad set of metrics, including reputational surveys, which can produce a circular bias in which established reputation drives rankings and high rankings in turn reinforce reputation. The reliance on university-reported data likewise raises validity concerns. The Leiden Ranking's use of objective bibliometric data from the Web of Science mitigates these issues.

Implications and Future Perspectives

The Leiden Ranking's deliberate focus on scientific output also defines its limits: it assesses research performance only, not educational quality or other facets of institutional performance. This invites discussion of multidimensional rankings that offer discipline-specific breakdowns and perhaps integrate teaching metrics for a more holistic evaluation.

Looking ahead, planned enhancements include expanding the number and types of institutions covered, refining disciplinary breakdowns, and strengthening the robustness of the bibliometric indicators, for instance through improved field normalization procedures and indicators of collaboration with industry.

The paper offers a clear account of the methodological rigor behind the Leiden Ranking and of its implications for the academic and research communities. It also raises important questions about methodological transparency and the evolving landscape in which universities are assessed globally.