Some modifications to the SNIP journal impact indicator (1209.0785v1)

Published 4 Sep 2012 in cs.DL

Abstract: The SNIP (source normalized impact per paper) indicator is an indicator of the citation impact of scientific journals. The indicator, introduced by Henk Moed in 2010, is included in Elsevier's Scopus database. The SNIP indicator uses a source normalized approach to correct for differences in citation practices between scientific fields. The strength of this approach is that it does not require a field classification system in which the boundaries of fields are explicitly defined. In this paper, a number of modifications that will be made to the SNIP indicator are explained, and the advantages of the resulting revised SNIP indicator are pointed out. It is argued that the original SNIP indicator has some counterintuitive properties, and it is shown mathematically that the revised SNIP indicator does not have these properties. Empirically, the differences between the original SNIP indicator and the revised one turn out to be relatively small, although some systematic differences can be observed. Relations with other source normalized indicators proposed in the literature are discussed as well.

Citations (177)

Summary

  • The paper analyzes shortcomings of the original SNIP journal impact indicator, noting counterintuitive properties and issues with journal mergers.
  • It proposes a revised SNIP built on three key modifications: using harmonic rather than arithmetic means, accounting for publications with at least one active reference, and dropping the distinction between DCP and RDCP.
  • Empirical tests show the revised SNIP yields modest differences but systematic shifts, particularly in fields like computer science, offering a more robust tool for journal impact assessment.

Analysis of Modifications to the SNIP Journal Impact Indicator

This paper, by Ludo Waltman and colleagues from the Centre for Science and Technology Studies at Leiden University, presents a detailed discussion of modifications to the Source Normalized Impact per Paper (SNIP) indicator. Included in Elsevier's Scopus database and introduced by Moed in 2010, the SNIP indicator aims to measure the citation impact of scientific journals while correcting for differences in citation practices across fields, without requiring explicitly defined field boundaries. Such normalization is critical given the widely varying citation densities across scientific domains.

The authors identify counterintuitive properties of the original SNIP indicator and propose a revised version that addresses them. The central issue they point out is that the original SNIP can paradoxically decrease when a journal receives additional citations, in particular when the citing publications have long reference lists. They also criticize its behavior under journal mergers, where the SNIP of the merged journal can unreasonably fall below that of each of the pre-merger journals.
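
To see why this can happen, consider a stylized example: the original indicator behaves roughly like citations per paper divided by the arithmetic mean number of active references of the citing publications, so a single citing publication with a very long reference list can inflate the denominator faster than it raises the numerator. The sketch below uses hypothetical numbers and omits the normalization by the median citation potential across journals, which a single extra citation barely affects; it illustrates the mechanism, not the full SNIP computation.

```python
# Stylized illustration with hypothetical numbers: an extra citation coming
# from a publication with a very long reference list can lower an
# arithmetic-mean-normalized impact ratio such as the original SNIP.
# (Assumes each citation comes from a distinct citing publication.)

papers = 10                      # publications in the journal
citing_refs = [10] * 20          # active references of the 20 citing publications

def impact_ratio(citing_refs, papers):
    rip = len(citing_refs) / papers              # citations per paper (numerator)
    dcp = sum(citing_refs) / len(citing_refs)    # arithmetic-mean citation potential (denominator)
    return rip / dcp

before = impact_ratio(citing_refs, papers)           # 2.0 / 10.0  = 0.200
after = impact_ratio(citing_refs + [100], papers)    # 2.1 / ~14.3 ≈ 0.147

print(f"before the extra citation: {before:.3f}")
print(f"after the extra citation:  {after:.3f}")     # lower, despite one more citation
```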

The revised SNIP introduces three significant modifications: (1) using harmonic rather than arithmetic means in the DCP (Database Citation Potential) calculation, (2) taking into account the fraction of publications with at least one active reference when computing DCP values, and (3) discarding the distinction between DCP and RDCP (Relative Database Citation Potential). Together, these changes tie the normalization to the characteristics of individual citing publications as well as of citing journals, strengthening the field normalization.
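
As a rough illustration of the first two modifications, the sketch below contrasts an arithmetic-mean citation potential with a harmonic-mean one scaled by the fraction of citing-side publications that have at least one active reference. All numbers are hypothetical, and the way the two modifications are combined here is an assumption made for illustration; the paper develops the exact revised formula, including how the DCP/RDCP distinction is dropped.

```python
from statistics import harmonic_mean, mean

# Hypothetical active-reference counts of the publications citing a journal;
# one citing publication has an unusually long reference list.
citing_refs = [8, 10, 12, 9, 11, 10, 120]

# Hypothetical fraction of citing-side publications with at least one active
# reference. The revised SNIP takes this fraction into account; combining it
# as a simple multiplicative factor is an assumption made here for illustration.
active_fraction = 0.8

arithmetic_cp = mean(citing_refs)                            # ~25.7, dominated by the outlier
harmonic_cp = active_fraction * harmonic_mean(citing_refs)   # ~9.1, far less outlier-sensitive

print(f"arithmetic-mean citation potential: {arithmetic_cp:.1f}")
print(f"harmonic-mean citation potential:   {harmonic_cp:.1f}")
```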

Empirical comparisons indicate that the revised SNIP values differ only modestly from the original ones, albeit with systematic shifts that are most visible in disciplines such as computer science and engineering. In these fields the revised indicator yields somewhat lower values than the original, implying a more tempered assessment of citation impact.

The implications of these findings are twofold. Practically, the revised indicator offers a more robust tool for journal impact assessment, reducing the distortions previously introduced by the length of citing publications' reference lists and by publications without active references. Theoretically, it advances the understanding of source normalization, pointing toward a more comprehensive weighting of citations than raw citation counts.

Looking to the future, further developments may involve addressing the revised SNIP's sensitivity to citation outliers, a notorious issue in average-based metrics, as seen with journals like Acta Crystallographica Section A achieving high SNIP values due to isolated high-citation events. Additionally, the intrinsic limitations of source normalized metrics, such as handling unbalanced between-field citation flows and growth rate disparities, remain areas warranting further exploration and refinement.

In summary, Waltman et al.'s work provides a careful reconsideration and recalibration of the SNIP indicator, aiming to make its field normalization more consistent. The revised indicator offers a promising basis for bibliometric analyses, while underscoring the continued evolution of citation impact assessment methodologies.