Quantitative Methods in Research Evaluation: Citation Indicators, Altmetrics, and Artificial Intelligence (2407.00135v2)

Published 28 Jun 2024 in cs.DL

Abstract: This book critically analyses the value of citation data, altmetrics, and artificial intelligence to support the research evaluation of articles, scholars, departments, universities, countries, and funders. It introduces and discusses indicators that can support research evaluation and analyses their strengths and weaknesses as well as the generic strengths and weaknesses of the use of indicators for research assessment. The book includes evidence of the comparative value of citations and altmetrics in all broad academic fields primarily through comparisons against article level human expert judgements from the UK Research Excellence Framework 2021. It also discusses the potential applications of traditional artificial intelligence and LLMs for research evaluation, with large scale evidence for the former. The book concludes that citation data can be informative and helpful in some research fields for some research evaluation purposes but that indicators are never accurate enough to be described as research quality measures. It also argues that AI may be helpful in limited circumstances for some types of research evaluation.

Summary

  • The paper presents a comprehensive analysis of citation indicators, altmetrics, and AI methods to evaluate research quality.
  • It uses empirical evidence from REF 2021 data to highlight the variability of citation impacts across disciplines.
  • The study recommends integrating quantitative metrics with qualitative assessments for responsible and ethical research evaluation.

Quantitative Methods in Research Evaluation: An Analysis

Overview

The book "Quantitative Methods in Research Evaluation: Citation Indicators, Altmetrics, and Artificial Intelligence" by Mike Thelwall, from the University of Sheffield, provides an exhaustive analysis of the tools and methodologies used in evaluating research outputs. The discussion spans across traditional citation indicators, altmetrics, and the emerging role of artificial intelligence in research evaluation. The book demarcates itself by tackling the intricacies and challenges involved in using these indicators effectively and responsibly.

Citation-Based Indicators

Theoretical Considerations

The theoretical discussion in the book underscores the multifaceted nature of citations. Merton’s normative theory of citations is critiqued for the simplistic assumption that more highly cited work is inherently more valuable. The theoretical framework highlights the inherent biases and limitations of citation-based indicators: variability in reference lists, differing citation practices across fields, and citations made for non-scholarly reasons. Such insights are crucial for understanding the limits of citations as a proxy for research quality.

Practical Considerations

The book examines the practical issues surrounding citation-based indicators in depth. Factors such as disciplinary differences, document-type variations, citation time windows, and discrepancies between citation databases are discussed at length. These considerations are essential for using citation-based indicators judiciously and with a clear view of their limitations.
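
To make the field and time-window issues concrete, the sketch below shows one common way of handling them: normalizing each article's citation count by the mean for articles in the same field and publication year. This is a minimal, hypothetical illustration of field normalization in general, not a reproduction of the specific indicators used in the book.

```python
from collections import defaultdict

# Hypothetical records: (field, publication_year, citation_count).
articles = [
    ("oncology", 2019, 42),
    ("oncology", 2019, 7),
    ("history", 2019, 3),
    ("history", 2019, 0),
]

# Mean citation count per (field, year) group.
totals = defaultdict(lambda: [0, 0])  # (field, year) -> [citation sum, article count]
for field, year, cites in articles:
    totals[(field, year)][0] += cites
    totals[(field, year)][1] += 1
means = {key: s / n for key, (s, n) in totals.items()}

# Field-normalized score: the raw count divided by its group mean, so values
# above 1.0 indicate above-average citation impact for that field and year.
for field, year, cites in articles:
    mean = means[(field, year)]
    score = cites / mean if mean else 0.0
    print(f"{field} {year}: {cites} citations -> normalized score {score:.2f}")
```

Normalization of this kind makes counts comparable across fields with very different citation cultures, which is precisely the sort of adjustment the practical discussion argues is needed before citation data can be interpreted.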

Empirical Evidence

The empirical sections of the book support the theoretical discussion with large-scale evidence. Notably, the analysis of UK Research Excellence Framework (REF) 2021 data shows a positive but field-dependent correlation between citation counts and expert-judged research quality. Citation counts can be indicative of research quality in some fields, particularly in health and the physical sciences, but are much less reliable in others, such as the arts and humanities.
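
The comparisons behind this evidence amount to correlating an article-level citation indicator with expert quality judgements, field by field. The sketch below shows that kind of calculation using SciPy's Spearman rank correlation; the scores are invented for illustration and are not the REF data.

```python
from scipy.stats import spearmanr

# Hypothetical per-article values for a single field: a field-normalized
# citation score and an expert quality rating on a 1-4 scale (loosely
# analogous to REF star ratings).
citation_scores = [0.2, 1.5, 0.9, 3.1, 0.4, 2.2, 1.1, 0.0]
expert_ratings = [1, 3, 2, 4, 2, 3, 3, 1]

rho, p_value = spearmanr(citation_scores, expert_ratings)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```

Repeating this calculation per field is what reveals the pattern described above: a moderate positive correlation in fields such as health and the physical sciences, and a much weaker one in the arts and humanities.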

Altmetrics

The text then turns to altmetrics, underscoring their potential to capture wider impacts of research, including societal and educational impacts. However, altmetrics are critiqued for being more susceptible than citations to manipulation and to reflecting non-scholarly attention, which raises questions about their validity as standalone indicators of research quality.

Artificial Intelligence in Research Evaluation

The exploration of AI methods for estimating research quality is a compelling addition. The reported results indicate that AI-based models can predict research quality reasonably accurately in some contexts, albeit with cautionary notes on potential biases and the limitations of the training data.
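
To give a sense of what "traditional AI" means in this context, the following sketch trains an off-the-shelf classifier to predict an expert quality rating from article-level features and reports cross-validated accuracy. The features, the model choice, and the synthetic data are illustrative assumptions, not the pipeline evaluated in the book.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400

# Hypothetical article-level features: field-normalized citation score,
# author count, journal citation rate, and years since publication.
X = np.column_stack([
    rng.gamma(2.0, 1.0, n),   # normalized citation score
    rng.integers(1, 12, n),   # author count
    rng.gamma(3.0, 0.5, n),   # journal citation rate
    rng.integers(1, 7, n),    # years since publication
])

# Synthetic expert ratings (1-4), loosely tied to the citation feature so the
# example has some signal to learn.
y = np.clip(np.round(1 + X[:, 0] + rng.normal(0, 0.8, n)), 1, 4).astype(int)

model = GradientBoostingClassifier(random_state=0)
accuracy = cross_val_score(model, X, y, cv=5)
print(f"Mean cross-validated accuracy: {accuracy.mean():.2f}")
```

Because the target here is partly built from one of the features, the resulting accuracy is meaningless as evidence; the point is only the workflow of features, model, and held-out evaluation that underlies quality-prediction studies of this kind.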

Policy Implications and Ethical Considerations

The book advocates for responsible use of indicators in research evaluation. It aligns with initiatives like DORA and the Leiden Manifesto, emphasizing that quantitative indicators should support, not replace, expert judgment. It also stresses the importance of using open data for transparency and global fairness. Moreover, the book addresses the perverse incentives and gaming behaviors that can arise from misuse of indicators, reinforcing the need for ethical practices in research assessment.

Practical Recommendations

The text provides actionable recommendations for different stakeholders in the research ecosystem:

  1. Researchers: Encouraged to understand and communicate the limitations of citation metrics and to be cautious about using them as sole indicators of research quality.
  2. Evaluators: Recommended to integrate quantitative indicators with qualitative assessments, maintaining a nuanced view of research impact and quality.
  3. Policy Makers: Urged to design evaluation frameworks that are adaptive to field-specific dynamics and inclusive of diverse research outputs.

Conclusion

Thelwall’s book is an essential read for experienced researchers and policy-makers involved in research evaluation. It offers a balanced, well-researched, and nuanced perspective on the role of quantitative methods in research assessment, highlighting both their utility and pitfalls. The underlying message is clear: while citation indicators, altmetrics, and AI hold promise, they must be applied judiciously and ethically to truly reflect the multifaceted nature of research quality and impact.
