Comparing Bibliometric Statistics Obtained from the Web of Science and Scopus
The paper, "Comparing Bibliometric Statistics Obtained from the Web of Science and Scopus," authored by Éric Archambault and colleagues, provides an analytical comparison of two predominant bibliometric databases: the Web of Science (WoS) and Scopus. Historically, WoS, originally part of the Institute for Scientific Information (ISI) and now under Thomson Reuters, was the sole major source of bibliometric data until the introduction of Scopus by Reed Elsevier in 2004. This paper addresses the critical issue of comparability and stability of bibliometric statistics derived from these databases, utilizing macro-level indicators to evaluate the number of papers and citations on a country basis.
Objectives and Methodology
The paper seeks to ascertain whether bibliometric data from WoS and Scopus yield comparable measures of scientific output at the national level. It focuses on two metrics: the number of publications and the number of citations those publications receive, broken down by scientific field and country. The comparison covers the period from 1996 to 2007, with the data from both sources transformed into relational databases on Microsoft SQL Server for analysis.
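As an illustration of the kind of macro-level aggregation described here, the following Python sketch shows how per-country paper and citation counts might be tallied once publication records have been extracted from such a relational database. The record fields (`country`, `citations`) and the in-memory format are assumptions made for illustration, not the paper's actual schema.

```python
# Minimal sketch: aggregate macro-level indicators (papers and citations)
# per country from a list of publication records. The record fields used
# here (country, citations) are hypothetical stand-ins for whatever schema
# the WoS/Scopus relational databases actually expose.
from collections import defaultdict

def country_indicators(records):
    """Return {country: {'papers': n, 'citations': m}} for a list of records."""
    totals = defaultdict(lambda: {"papers": 0, "citations": 0})
    for rec in records:
        country = rec["country"]
        totals[country]["papers"] += 1
        totals[country]["citations"] += rec.get("citations", 0)
    return dict(totals)

if __name__ == "__main__":
    sample = [
        {"country": "CA", "citations": 12},
        {"country": "CA", "citations": 3},
        {"country": "JP", "citations": 7},
    ]
    print(country_indicators(sample))
    # {'CA': {'papers': 2, 'citations': 15}, 'JP': {'papers': 1, 'citations': 7}}
```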
Results
The correlations between WoS and Scopus are notably high:
- The correlation coefficients for both the number of papers and the number of citations by country are above 0.99, indicating negligible discrepancies between the databases.
- The paper reveals that while minor ranking shifts exist (for example, certain Asian countries such as Japan and China gain two ranks in Scopus), the top 25 countries remain consistent across both databases.
- When the data are further segmented into scientific fields such as physics, chemistry, biology, and engineering, the correlations remain robust; even in an emerging field such as nanotechnology, the correlation is 0.991 for papers and 0.967 for citations. A minimal sketch of this kind of correlation check appears after this list.
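To make the correlation and ranking comparisons concrete, here is a minimal sketch of how such a check could be run once both databases have been reduced to per-country counts. Pearson correlation for the counts and Spearman correlation for the rankings mirror standard bibliometric practice rather than the paper's exact computation, and the country figures below are invented placeholders, not the paper's data.

```python
# Minimal sketch: compare per-country paper counts from two sources.
# Pearson r measures agreement on the counts themselves; Spearman rho
# measures agreement on the resulting country rankings. The numbers
# below are invented placeholders, not values from the paper.
from scipy.stats import pearsonr, spearmanr

wos_counts    = {"US": 300_000, "JP": 90_000, "UK": 85_000, "DE": 80_000, "CN": 70_000}
scopus_counts = {"US": 320_000, "JP": 95_000, "UK": 88_000, "DE": 82_000, "CN": 78_000}

countries = sorted(wos_counts)        # align the two series on the same countries
x = [wos_counts[c] for c in countries]
y = [scopus_counts[c] for c in countries]

r, _ = pearsonr(x, y)                 # agreement on counts
rho, _ = spearmanr(x, y)              # agreement on rank order
print(f"Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")
```

A high Pearson r with a lower Spearman rho would indicate that the databases agree on overall volumes but shuffle the ordering of closely ranked countries, which is the pattern behind the small rank shifts reported above.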
Discussion
These findings suggest that both WoS and Scopus provide reliable and consistent data for country-level bibliometric analyses. The strong correlation underscores the databases' robustness as tools for evaluating macro-level scientific outputs. The paper implies that despite differences in document types and indexing policies between the databases, the measures of scientific production are stable.
Implications and Future Directions
The results affirm the validity of using bibliometrics for large-scale analyses of scientific output. From a practical standpoint, researchers and policymakers can confidently rely on either database for assessments at a national or macro level, although the paper advises caution in fields with nuanced publication types or at more granular institutional levels.
Theoretically, this work contributes to the broader discourse on scientometric methodologies, providing evidence for consistency in large datasets. Future research could expand upon these findings by focusing on disciplinary subfields where document categorization discrepancies might impact results more significantly. Additionally, examining the applicability of these results in the social sciences and humanities could further validate the generalizability of these databases.
In sum, Archambault et al.'s work demonstrates a credible alignment between WoS and Scopus, ensuring that bibliometric analyses based on these sources can be conducted with a high degree of confidence. The consistency across scientific domains reinforces the applicability of both databases in a wide range of research environments.