- The paper exposes fundamental flaws in bibliometric evaluation and shows how those weaknesses enable fraudulent publishing practices in mathematics.
- It employs a comprehensive analysis of citation networks and case studies, demonstrating the impact of citation cartels, excessive self-citations, and ranking incentives.
- The report warns of emerging risks from AI-generated fraudulent manuscripts and advocates for a shift toward qualitative, expert-driven research assessment.
Systemic Issues and Manipulation in Mathematical Publishing
Introduction
The paper "Fraudulent Publishing in the Mathematical Sciences" (2509.07257) provides a comprehensive analysis of the vulnerabilities and manipulation endemic to the current mathematical publishing ecosystem. The authors, representing a joint working group of the IMU and ICIAM, systematically dissect the interplay between bibliometric-driven research assessment, the proliferation of predatory publishing practices, and the emergence of fraudulent behaviors that threaten the integrity of mathematical research. The report is notable for its detailed examination of the mechanisms by which bibliometric measures are gamed, the structural weaknesses of citation databases, and the impact of these phenomena on both individual and institutional reputations.
Bibliometrics and the Mathematical Sciences
The report foregrounds the unique characteristics of mathematical publishing: low publication and citation rates, small coauthorship networks, and significant heterogeneity across subfields. These features render standard bibliometric measures—such as citation counts, h-index, and journal impact factors—particularly ill-suited for evaluating mathematical research. The authors argue that the small absolute numbers in mathematical citation data amplify the effects of manipulation, making the discipline especially susceptible to gaming.
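To make that fragility concrete, here is a minimal sketch of the standard h-index computation. The citation profile is invented, but it illustrates how, at the low absolute counts typical of pure mathematics, a handful of extra (possibly orchestrated) citations shifts the metric visibly:

```python
def h_index(citation_counts):
    """Largest h such that the author has at least h papers
    with at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Illustrative profile with the low counts typical of mathematics.
profile = [12, 9, 7, 5, 4, 3, 2, 1]
print(h_index(profile))                    # 4
print(h_index([c + 3 for c in profile]))   # 6: three added citations per paper
```

At high citation volumes the same three-citation nudge per paper would barely register, which is exactly the asymmetry the report identifies.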
The paper highlights that the widespread adoption of bibliometric-based evaluation has shifted researcher behavior, incentivizing practices that optimize for metrics rather than substantive scientific contribution. The authors assert that this shift has led to a proliferation of fraudulent activities, including citation cartels, excessive self-citation, and the strategic use of predatory journals.
Citation Databases and Their Limitations
The report critically analyzes the major citation databases—Clarivate's Web of Science, Elsevier's Scopus, Google Scholar, zbMATH Open, and MathSciNet—detailing the opaque inclusion criteria, lack of curation, and conflicts of interest inherent in the commercial ones. The authors note that these databases often index low-quality or predatory journals, further undermining the reliability of bibliometric indicators derived from them.
The exclusion of mathematics from Clarivate's Highly Cited Researchers (HCR) list in 2023 is discussed as a case study in the failure of citation-based metrics to capture genuine research quality. The authors document that many individuals previously listed as HCRs in mathematics were not recognized as leading figures by the mathematical community, and that their inclusion was largely attributable to manipulative citation practices.
Patterns of Manipulation and Fraud
The report delineates a taxonomy of fraudulent behaviors, ranging from "occasional poor practice" (e.g., salami-slicing, excessive self-citation, reviewer coercion) to "systematic bad practice" (e.g., citation cartels, copy-paste plagiarism, authorship manipulation) and outright "fraudulent behavior" (e.g., paper mills, citation sales, blackmail, identity fraud). The authors provide concrete examples and reference high-profile cases across multiple countries, demonstrating the global and cross-disciplinary nature of the problem.
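The citation-cartel pattern can be made concrete with a toy detector. This is not a method from the report: the counts, the symmetry score, and the thresholds below are all invented for illustration, and real screening would operate at the scale of full citation databases. Still, the signature is the same: groups of authors whose mutual citation counts are both large and unusually balanced.

```python
from collections import defaultdict
from itertools import combinations

# citations[a][b] = how often author a's papers cite author b's papers.
# All counts below are invented for illustration.
citations = defaultdict(lambda: defaultdict(int))
edges = [("A", "B", 40), ("B", "A", 38), ("A", "C", 35), ("C", "A", 33),
         ("B", "C", 30), ("C", "B", 29),              # dense and mutual
         ("A", "D", 4), ("D", "E", 2), ("E", "A", 1)]  # background noise
for src, dst, n in edges:
    citations[src][dst] = n

def symmetry(a, b):
    """1.0 when a and b cite each other equally often, 0.0 when one-sided."""
    ab, ba = citations[a][b], citations[b][a]
    return min(ab, ba) / max(ab, ba) if max(ab, ba) else 0.0

# Flag pairs that cite each other both heavily and near-symmetrically;
# overlapping flagged pairs then outline the candidate cartel.
authors = sorted({a for a, _, _ in edges} | {b for _, b, _ in edges})
flagged = [(a, b) for a, b in combinations(authors, 2)
           if citations[a][b] + citations[b][a] >= 50 and symmetry(a, b) > 0.8]
print(flagged)  # [('A', 'B'), ('A', 'C'), ('B', 'C')]: a mutually citing triangle
```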
A key empirical finding is the dramatic difference in self-citation and self-referencing rates among HCRs, top-cited mathematicians, and prizewinners: HCRs exhibit rates more than twice those of the other two cohorts, supporting the claim that HCR status in mathematics is a poor proxy for research quality.
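For clarity, here is how those two rates can be computed from a citation graph, under one plausible reading of the definitions (the report's exact conventions may differ): self-referencing looks at the references an author's papers make, self-citation at the citations those papers receive.

```python
def self_rates(author, papers, references):
    """papers: paper_id -> set of author names;
    references: paper_id -> list of cited paper_ids.
    Returns (self_referencing_rate, self_citation_rate) for `author`."""
    own = {p for p, names in papers.items() if author in names}

    # Self-referencing: of all references *made by* the author's papers,
    # the fraction that point back to the author's own papers.
    made = [tgt for p in own for tgt in references.get(p, [])]
    self_ref = sum(tgt in own for tgt in made) / len(made) if made else 0.0

    # Self-citation: of all citations *received by* the author's papers,
    # the fraction that originate from the author's own papers.
    received = [src for src, tgts in references.items()
                for tgt in tgts if tgt in own]
    self_cit = sum(src in own for src in received) / len(received) if received else 0.0
    return self_ref, self_cit

papers = {"p1": {"X"}, "p2": {"X"}, "p3": {"Y"}}
references = {"p1": ["p2"], "p2": ["p1", "p3"], "p3": ["p1"]}
print(self_rates("X", papers, references))  # (0.666..., 0.666...)
```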
The report also addresses the correlation between HCR status and retractions, noting that a nontrivial fraction of HCRs in mathematics have had papers retracted for reasons including paper mill suspicion and plagiarism. The increasing rate of retractions across all sciences is highlighted as evidence of a growing crisis in research integrity.
Institutional Incentives and University Rankings
The authors analyze the role of university rankings—such as the Academic Ranking of World Universities (ARWU, "Shanghai ranking")—in perpetuating fraudulent publishing. Since these rankings rely heavily on bibliometric indicators and HCR counts, institutions are incentivized to engage in or tacitly support manipulative practices to improve their standing. The report asserts that the ease with which affiliations and publication data can be manipulated renders such rankings largely meaningless and actively harmful to the research ecosystem.
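A toy model makes the sensitivity explicit. The weight and normalization below are invented and are not ARWU's published methodology; the point is only that any composite score with a head-count term rewards acquiring affiliations rather than producing research:

```python
# Illustrative composite score; the weight and normalization are invented
# and are not ARWU's published methodology.
def composite(hcr_count, other_indicators, w_hcr=0.2):
    hcr_term = min(hcr_count / 10, 1.0) * 100   # cap at an assumed leader count
    return w_hcr * hcr_term + (1 - w_hcr) * other_indicators

universities = {"U1": (3, 70.0), "U2": (2, 71.0), "U3": (2, 69.0)}
rank = lambda: sorted(universities, key=lambda u: -composite(*universities[u]))

print(rank())                     # ['U1', 'U2', 'U3']
universities["U3"] = (4, 69.0)    # U3 "acquires" two HCR affiliations
print(rank())                     # ['U3', 'U1', 'U2']: rank jump, no new research
```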
The Impact of AI on Fraudulent Publishing
The paper devotes a section to the implications of AI-driven text generation for scientific publishing. While AI tools can assist with copy-editing and translation, their use in generating research content introduces new risks. The authors warn that paper mills are likely to exploit AI to produce plausible but fraudulent manuscripts at scale, complicating detection efforts. The emergence of "tortured phrases" as a marker of AI-generated text is noted, but the authors caution that as generative models improve, such heuristics will become less effective.
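The tortured-phrases heuristic amounts to a fingerprint scan. The dictionary below contains a few documented examples from the literature on the phenomenon; actual screeners, such as the Problematic Paper Screener, maintain far larger curated lists.

```python
import re

# A few documented tortured phrases and the standard terms they likely
# replace; real screeners maintain far larger curated fingerprint lists.
TORTURED = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "irregular backwoods": "random forest",
    "colossal information": "big data",
    "flag to commotion": "signal to noise",
}

def scan(text):
    hits = []
    for phrase, standard in TORTURED.items():
        for m in re.finditer(re.escape(phrase), text, flags=re.IGNORECASE):
            hits.append((m.start(), phrase, standard))
    return sorted(hits)

sample = ("We train a profound learning model and report the "
          "flag to commotion ratio on colossal information.")
for pos, phrase, standard in scan(sample):
    print(f"offset {pos}: '{phrase}' (likely '{standard}')")
```

As the report cautions, detectors of this kind work only as long as fraudulent text is produced by crude paraphrasing; fluent generative models leave no such fingerprints.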
Publishers are responding with AI-powered detection tools, but the report anticipates an arms race between fraudsters and detection systems. The authors call for robust guidelines, shared databases of fraudulent works, and transparent editorial protocols to address these challenges.
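One minimal sketch of what such a shared database might look like, assuming normalized-text fingerprints as keys (every detail here is a hypothetical design, not a system described in the report): publishers could check submissions against the registry without exchanging full manuscripts.

```python
import hashlib
import re

def fingerprint(text):
    """SHA-256 of case- and whitespace-normalized text, so trivial
    reformatting does not defeat the lookup. (A production system
    would add fuzzy or passage-level matching.)"""
    normalized = re.sub(r"\s+", " ", text.lower()).strip()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Shared registry: fingerprints of manuscripts already judged fraudulent.
registry = {fingerprint("An exact solution to every Diophantine equation ...")}

def check_submission(manuscript):
    return "known fraudulent" if fingerprint(manuscript) in registry else "no match"

print(check_submission("an  exact solution to every\nDiophantine equation ..."))
# -> known fraudulent: normalization collapses the formatting differences
```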
Implications and Future Directions
The report's analysis has significant implications for research assessment, publishing policy, and the broader scientific community. The authors make several strong claims:
- Bibliometric measures are fundamentally inadequate for evaluating mathematical research and are easily gamed.
- HCR status and similar metrics are unreliable indicators of research quality in mathematics.
- University rankings based on bibliometric data create perverse incentives and are susceptible to manipulation.
- AI will exacerbate existing problems in scientific publishing, increasing the scale and sophistication of fraudulent activity.
The authors advocate for a shift away from quantitative metrics toward qualitative, expert-driven evaluation, and for the development of community-driven standards and detection mechanisms. They emphasize the need for vigilance and ethical commitment at all levels of the research enterprise.
Conclusion
"Fraudulent Publishing in the Mathematical Sciences" provides a rigorous and detailed account of the structural vulnerabilities in mathematical publishing, the mechanisms of bibliometric manipulation, and the emerging threats posed by AI-generated content. The report's empirical findings and policy analysis underscore the urgent need for reform in research assessment and publishing practices. The authors' recommendations, to be detailed in a subsequent publication, are likely to be of central importance for the future integrity of mathematical research and its evaluation.