- The paper analyzes shortcomings of the original SNIP journal impact indicator, noting counterintuitive properties (e.g., additional citations can lower a journal's score) and anomalous behavior under journal mergers.
- It proposes a revised SNIP incorporating three key modifications: using harmonic rather than arithmetic means, factoring in the fraction of publications with at least one active reference, and dropping the distinction between DCP and RDCP.
- Empirical tests show the revised SNIP yields modest differences but systematic shifts, particularly in fields like computer science, offering a more robust tool for journal impact assessment.
# Analysis of Modifications to the SNIP Journal Impact Indicator
This paper, authored by Ludo Waltman and colleagues at the Centre for Science and Technology Studies (CWTS) at Leiden University, presents a detailed discussion of modifications to the Source Normalized Impact per Paper (SNIP) indicator. The SNIP indicator, introduced by Moed and computed from the Scopus database, measures the citation impact of scientific journals while correcting for differences in citation practices across fields, without requiring explicitly delimited field boundaries. Such source normalization matters because citation densities vary widely across scientific domains.
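In schematic form (notation ours, simplified from Moed's definition), the original indicator divides a journal's raw impact per paper (RIP) by the relative database citation potential (RDCP) of its subject field, where the subject field is the set of publications citing the journal:

```latex
\mathrm{SNIP} = \frac{\mathrm{RIP}}{\mathrm{RDCP}},
\qquad
\mathrm{RIP} = \frac{c}{n},
\qquad
\mathrm{RDCP} = \frac{\mathrm{DCP}}{\operatorname{median}_k \mathrm{DCP}_k}
```

Here c is the number of citations received by the journal's n publications, and DCP is the arithmetic mean number of active references (references to publications that are indexed in the database and fall within the citation window) per citing publication.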
The authors identify counterintuitive properties in the original SNIP indicator and propose a revised version that addresses them. The fundamental issue is that the original SNIP can paradoxically decrease when a journal receives additional citations, particularly when the citing publications have long reference lists. They further criticize its behavior under journal mergers: the merged journal's SNIP can unreasonably fall below the SNIP of each of the pre-merger journals.
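A stylized example with invented numbers (not taken from the paper) makes the first anomaly concrete under the definitions above. Suppose a journal has n = 10 publications and receives c = 20 citations from 20 citing publications, each with 10 active references, and suppose the database-wide median DCP is 10 and stays fixed:

```latex
\mathrm{RIP} = \tfrac{20}{10} = 2.0,
\qquad
\mathrm{DCP} = 10,
\qquad
\mathrm{RDCP} = 1.0
\;\Longrightarrow\;
\mathrm{SNIP} = 2.0
```

Now a 21st citation arrives from a publication with 100 active references:

```latex
\mathrm{RIP} = \tfrac{21}{10} = 2.1,
\qquad
\mathrm{DCP} = \tfrac{20 \cdot 10 + 100}{21} \approx 14.3,
\qquad
\mathrm{RDCP} \approx 1.43
\;\Longrightarrow\;
\mathrm{SNIP} \approx 1.47
```

The extra citation lowers the journal's SNIP from 2.0 to about 1.47. A harmonic mean of the same reference counts would be roughly 10.4 rather than 14.3, since the outlier enters the average as 1/100 instead of 100; this is the intuition behind the first modification described below.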
The revised SNIP introduces three significant modifications: (1) using harmonic rather than arithmetic means when averaging reference counts in the DCP (database citation potential) calculation, (2) incorporating the fraction of publications with at least one active reference when computing DCP values, and (3) discarding the distinction between DCP and RDCP (relative database citation potential). Together, these changes combine characteristics of citing journals with features of individual citing publications, strengthening the field normalization; a sketch of the mechanics follows.
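The Python sketch below is a minimal illustration of how the first two modifications change the citation-potential calculation, under our reading of the summary above. The function names, the multiplicative placement of the active-reference fraction, and the input numbers are our assumptions for illustration, not the paper's exact algorithm.

```python
from statistics import mean, harmonic_mean

def original_dcp(active_refs):
    """Original-style citation potential: arithmetic mean of active-reference
    counts, computed only over citing publications that have at least one
    active reference (publications with none are simply excluded)."""
    nonzero = [a for a in active_refs if a > 0]
    return mean(nonzero)

def revised_dcp(active_refs):
    """Revised-style citation potential sketch: harmonic mean of the nonzero
    active-reference counts, scaled by the fraction of publications with at
    least one active reference. NOTE: this placement of the fraction is our
    assumption; the paper's exact formula may differ."""
    nonzero = [a for a in active_refs if a > 0]
    frac_active = len(nonzero) / len(active_refs)
    return harmonic_mean(nonzero) * frac_active

# Invented active-reference counts for the publications in a journal's
# subject field: twenty with 10 references, one outlier with 100, and
# four with no active references at all.
refs = [10] * 20 + [100] + [0] * 4

print(f"original-style DCP: {original_dcp(refs):.2f}")  # 14.29
print(f"revised-style DCP:  {revised_dcp(refs):.2f}")   # ~8.78
```

Because the harmonic mean is dominated by the small reference counts, the outlier with 100 references barely moves the revised-style potential, so an extra citation from such a publication no longer depresses the indicator.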
Empirical tests indicate that the revised SNIP differs only modestly from the original, but the differences are systematic, particularly in disciplines such as computer science and engineering. Journals in these fields tend to score relatively lower under the revised indicator than under the original SNIP.
The implications of these findings are twofold. Practically, the revised indicator offers a more robust tool for journal impact assessment, reducing sensitivity to reference-list length and to references that fall outside the database. Theoretically, it deepens the understanding of source normalization, suggesting that field correction should weight citations by properties of the citing publications rather than rely on raw citation counts alone.
Looking ahead, further development may need to address the revised SNIP's sensitivity to citation outliers, a well-known weakness of average-based metrics, as seen when journals such as Acta Crystallographica Section A reach very high values on the strength of a single extraordinarily cited publication. Intrinsic limitations of source-normalized metrics, such as unbalanced between-field citation flows and differing field growth rates, also remain open areas for further exploration and refinement.
In summary, Waltman et al.'s work offers a careful reconsideration and recalibration of the SNIP indicator, aiming at more effective field normalization. The revised indicator is a promising step toward more refined bibliometric analysis, while underscoring the continuing evolution of citation impact assessment methodologies.