Popular and/or Prestigious? Measures of Scholarly Esteem (1012.4871v1)

Published 22 Dec 2010 in cs.DL

Abstract: Citation analysis does not generally take the quality of citations into account: all citations are weighted equally irrespective of source. However, a scholar may be highly cited but not highly regarded: popularity and prestige are not identical measures of esteem. In this study we define popularity as the number of times an author is cited and prestige as the number of times an author is cited by highly cited papers. Information Retrieval (IR) is the test field. We compare the 40 leading researchers in terms of their popularity and prestige over time. Some authors are ranked high on prestige but not on popularity, while others are ranked high on popularity but not on prestige. We also relate measures of popularity and prestige to date of Ph.D. award, number of key publications, organizational affiliation, receipt of prizes/honors, and gender.

Citations (255)

Summary

  • The paper distinguishes between popularity and prestige by comparing total citations with citations from top-tier works.
  • It employs a phased analysis of IR data from 1956 to 2008 to reveal shifts in researcher rankings and the stable influence of prestigious authors.
  • Findings show that prestige correlates more with distinguished awards than mere popularity, suggesting a need for quality-weighted citation metrics in evaluations.

Scholarly Popularity and Prestige: A Thorough Exploration of Citation Analysis

The paper "Popular and/or Prestigious? Measures of Scholarly Esteem" by Ying Ding and Blaise Cronin seeks to demarcate the boundary between popularity and prestige in scholarly citation analysis. While traditional citation metrics weight all citations equally, this research distinguishes sheer citation volume (popularity) from citations emanating from highly cited works (prestige). The authors pilot the investigation in Information Retrieval (IR), an interdisciplinary field straddling multiple knowledge domains.

Methodological Overview

The researchers collected data from the Web of Science, focusing on IR-related works covering the period 1956-2008. They divided the data into four phases: 1956-1980, 1981-1990, 1991-2000, and 2001-2008. Popularity was quantified by counting all citations an author received, while prestige was calculated by counting citations from the top 20% of highly cited papers in each timeframe.
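The distinction above can be made concrete with a small sketch. This is not the authors' actual pipeline; the toy dataset, field names (`citations`, `cites`), and function are illustrative assumptions. Each citing paper carries its own citation count and a list of authors it cites; popularity counts every citation, while prestige counts only citations from the top 20% most-cited citing papers.

```python
def esteem_counts(citing_papers, top_fraction=0.2):
    """Return (popularity, prestige) citation counts per author.

    popularity: total citations an author receives.
    prestige:   citations received only from the top `top_fraction`
                most-cited citing papers.
    """
    # Rank citing papers by their own citation counts and keep
    # the indices of the top fraction.
    order = sorted(range(len(citing_papers)),
                   key=lambda i: citing_papers[i]["citations"],
                   reverse=True)
    cutoff = max(1, int(len(citing_papers) * top_fraction))
    top = set(order[:cutoff])

    popularity, prestige = {}, {}
    for i, paper in enumerate(citing_papers):
        for author in paper["cites"]:
            popularity[author] = popularity.get(author, 0) + 1
            if i in top:
                prestige[author] = prestige.get(author, 0) + 1
    return popularity, prestige


# Hypothetical toy data: one highly cited paper and several
# rarely cited ones.
papers = [
    {"citations": 120, "cites": ["B"]},       # the lone top-20% paper
    {"citations": 3,   "cites": ["A"]},
    {"citations": 2,   "cites": ["A"]},
    {"citations": 1,   "cites": ["A"]},
    {"citations": 1,   "cites": ["A", "C"]},
]

pop, pres = esteem_counts(papers)
# Author A is the most popular (4 citations) yet has no prestige;
# author B is cited once, but by the most-cited paper.
```

The example reproduces the paper's central observation in miniature: popularity and prestige rankings can diverge sharply for the same set of authors.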

Analytical Findings

The investigation reveals considerable churn in popularity rankings over time: only a handful of researchers remained consistently popular throughout the study period. By contrast, prestige rankings were more stable, with ten authors maintaining top-40 status across all four phases.

Analyzing the correlation between citation metrics and professional accolades, the authors discovered that scholars with higher prestige were more likely to receive distinguished awards in their field, such as the Gerard Salton Award. This underscores the notion that prestige could be a more reflective measure of scholarly impact than popularity.

An intriguing aspect is the decoupling of the prestige and popularity measures. Popularity correlated strongly with impact factor (r = 0.939), whereas prestige correlated only moderately with popularity (r = 0.563), indicating that the authors cited most often are not necessarily those cited by the most influential papers.

Implications and Future Directions

The paper compels us to reconsider how citation metrics are evaluated and used within academic settings. For practical applications, the findings suggest the necessity of including quality-weighted citation metrics in promotions and funding decisions to better assess scholarly influence and esteem. Theoretically, this paper calls for a nuanced appreciation of citation data, advancing the methodological discourse on research impact assessment.

Looking ahead, the researchers intend to refine these metrics using graph-theoretic models such as PageRank and HITS to better capture structural influence within citation and co-authorship networks. These models account for indirect influence, paralleling web-ranking strategies and offering a more comprehensive view of scholarly interaction and impact.
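To illustrate the graph-theoretic direction the authors mention, here is a minimal PageRank sketch via power iteration on a toy citation graph. The graph and node names are invented for illustration and are not from the paper.

```python
def pagerank(graph, damping=0.85, iterations=100):
    """Compute PageRank scores for a directed citation graph.

    graph: dict mapping each paper to the list of papers it cites.
    """
    nodes = list(graph)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new_rank = {node: (1 - damping) / n for node in nodes}
        for node, cited in graph.items():
            if cited:
                # Split this node's rank evenly among the papers it cites.
                share = damping * rank[node] / len(cited)
                for target in cited:
                    new_rank[target] += share
            else:
                # Dangling node (cites nothing): spread its rank evenly.
                for target in nodes:
                    new_rank[target] += damping * rank[node] / n
        rank = new_rank
    return rank


# Toy citation graph: p1 and p2 both cite p3, which cites p4.
citations = {
    "p1": ["p3"],
    "p2": ["p3"],
    "p3": ["p4"],
    "p4": [],
}
scores = pagerank(citations)
# p3 accumulates rank from two citers, and p4 inherits p3's influence,
# so both outrank p1 and p2 despite similar raw citation counts.
```

The key property, in contrast to raw citation counts, is that a citation from a high-ranked paper transfers more influence than one from an obscure paper, which is precisely the prestige intuition formalized.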

Conclusion

In conclusion, Ding and Cronin's paper marks a shift toward understanding the dual facets of scholarly esteem: popularity and prestige. By adopting a methodological sophistication that transcends simple citation counts, their work contributes significantly to the bibliometric toolbox, enhancing our ability to gauge scholarly impact with an appreciation for the complexity and nuance inherent in academic communication.