Over-Optimization of Academic Publishing Metrics: Observing Goodhart's Law in Action (1809.07841v1)

Published 20 Sep 2018 in cs.SI, cs.CY, and physics.soc-ph

Abstract: The academic publishing world is changing significantly, with ever-growing numbers of publications each year and shifting publishing patterns. However, the metrics used to measure academic success, such as the number of publications, citation number, and impact factor, have not changed for decades. Moreover, recent studies indicate that these metrics have become targets and follow Goodhart's Law, according to which "when a measure becomes a target, it ceases to be a good measure." In this study, we analyzed over 120 million papers to examine how the academic publishing world has evolved over the last century. Our study shows that the validity of citation-based measures is being compromised and their usefulness is lessening. In particular, the number of publications has ceased to be a good metric as a result of longer author lists, shorter papers, and surging publication numbers. Citation-based metrics, such as citation number and h-index, are likewise affected by the flood of papers, self-citations, and lengthy reference lists. Measures such as a journal's impact factor have also ceased to be good metrics due to the soaring numbers of papers that are published in top journals, particularly from the same pool of authors. Moreover, by analyzing properties of over 2600 research fields, we observed that citation-based metrics are not beneficial for comparing researchers in different fields, or even in the same department. Academic publishing has changed considerably; now we need to reconsider how we measure success.

Citations (207)

Summary

  • The paper demonstrates that over-optimization of academic metrics, influenced by Goodhart's Law, undermines the reliability of citation counts and impact factors.
  • It analyzes trends such as explosive publication volume, self-citation practices, and concentrated authorship that distort traditional measures of research quality.
  • The study calls for the development of transparent, alternative metrics to accurately assess academic impact across diverse research fields.

Observing Goodhart's Law in Academic Publishing Metrics

The paper "Over-Optimization of Academic Publishing Metrics: Observing Goodhart's Law in Action" by Michael Fire and Carlos Guestrin provides a comprehensive analysis of the evolving landscape of academic publishing. It emphasizes how traditional metrics used to assess academic success, such as the number of publications, citations, and impact factor, have been subject to manipulation and now often fail to accurately measure true academic impact.

Key Findings and Claims

The authors analyzed over 120 million papers, highlighting significant changes in academic publishing patterns over the past century. A prevailing theme in their findings is the influence of Goodhart's Law, which posits that when a measure becomes a target, it ceases to be a good measure. They argue that citation-based metrics have been compromised on several fronts:

  1. Increase in Publication Volume: The dramatic surge in the number of publications, from about 174,000 in 1950 to over 7 million in 2014, has rendered sheer quantity an inadequate indicator of academic merit. This is exacerbated by a trend towards shorter papers with longer author lists, resulting in more publications per researcher.
  2. Manipulation of Citation Metrics: The paper provides evidence that metrics like the h-index and citation counts are inflated by rising self-citation rates and lengthening reference lists, exaggerating apparent impact (a minimal sketch of the self-citation effect appears after this list).
  3. Variations Across Research Fields: The paper includes an analysis of over 2,600 research fields, revealing significant disparities in citation practices and metric reliability across domains, suggesting that citation-based metrics are inappropriate for cross-field comparisons.
  4. Over-Optimization of Journal Impact Factors: Top journals increasingly publish large numbers of papers from the same pool of authors, likely due to competitive pressure to maintain high impact factors. This can create a closed network that limits the diversity and novelty of published research (see the impact-factor sketch below).
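
To make the self-citation effect in point 2 concrete, here is a minimal sketch (not taken from the paper; the per-paper numbers are invented) that computes a researcher's h-index twice: once over raw citation counts and once after removing self-citations. The h-index is the largest h such that the researcher has h papers with at least h citations each, so padding citation counts via self-citation inflates it directly.

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical per-paper data: (total citations, citations coming from
# the author's own other papers, i.e., self-citations).
papers = [(40, 12), (25, 10), (18, 9), (12, 8), (9, 7), (7, 6), (5, 5)]

raw = h_index([total for total, _ in papers])
adjusted = h_index([total - self_cites for total, self_cites in papers])

print(f"h-index on raw counts:           {raw}")       # 6
print(f"h-index with self-cites removed: {adjusted}")  # 4
```

In this toy example, stripping self-citations drops the h-index from 6 to 4, which is exactly the kind of gap that makes the raw metric easy to game.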
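
Point 4 concerns the two-year journal impact factor, which is a simple ratio: citations received in year Y to items the journal published in years Y-1 and Y-2, divided by the number of citable items published in those two years. The sketch below, with invented figures, shows why accepting more papers from a tightly knit, mutually citing author pool can raise the ratio:

```python
# Two-year impact factor for year Y (standard definition):
#   IF(Y) = (citations in Y to items from Y-1 and Y-2)
#           / (citable items published in Y-1 and Y-2)
# All figures below are invented for illustration.

def impact_factor(citations_in_year: int, citable_items: int) -> float:
    return citations_in_year / citable_items

baseline = impact_factor(citations_in_year=900, citable_items=300)

# Suppose the journal adds 100 papers from an in-group of authors whose
# mutual citations bring in 400 extra citations: the numerator grows
# faster than the denominator, so the impact factor rises.
inflated = impact_factor(citations_in_year=900 + 400, citable_items=300 + 100)

print(f"baseline impact factor:   {baseline:.2f}")  # 3.00
print(f"with the in-group papers: {inflated:.2f}")  # 3.25
```

Any batch of added papers that cites the journal's own recent output at above-average rates pushes the impact factor up, independent of research quality.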

Implications for Academic Measurement

The implications of these findings are significant for both the practice and theory of academic evaluation. The paper suggests that current metrics drive adverse behaviors among academics, such as salami slicing, ghost authorship, and other unethical practices, further supporting the argument that these metrics are poor proxies for research quality.

The paper challenges the academic community to reconsider its reliance on traditional measures of impact and calls for new evaluation metrics that better reflect the quality and influence of research rather than its quantity. The development of such metrics could draw on more nuanced data science tools and comprehensive datasets, while avoiding the pitfall of themselves becoming new targets under Goodhart's Law.

Future Directions

The research encourages a shift in how success is assessed, advocating for measures that account for diversity within and across disciplines. The authors propose open and transparent review processes as a potential avenue for reform. In the future, integrating alternative metrics—such as those based on the societal impact of research, multidisciplinary collaborations, and open-access dissemination—might offer more balanced assessments.

The trajectory of academic publishing will likely further integrate digital and data-driven approaches for evaluation, allowing for continuous refinement of how academic impact is understood and measured. Collaboration between scientometricians, researchers, and policy-makers will be essential in this evolving landscape to ensure that new metrics foster genuine academic advancement rather than distort it through optimization behaviors.

In conclusion, the paper by Fire and Guestrin marks an important contribution to the discussion on academic publishing metrics, calling for a thoughtful reassessment of the tools used to gauge academic success and their adaptation to the rapidly changing publishing environment.