'Quis custodiet ipsos custodes?' Who will watch the watchmen? On Detecting AI-generated peer-reviews (2410.09770v1)

Published 13 Oct 2024 in cs.CL, cs.AI, cs.DL, and cs.LG

Abstract: The integrity of the peer-review process is vital for maintaining scientific rigor and trust within the academic community. With the steady increase in the usage of LLMs like ChatGPT in academic writing, there is a growing concern that AI-generated texts could compromise scientific publishing, including peer reviews. Previous works have focused on generic AI-generated text detection or have presented an approach for estimating the fraction of peer reviews that could be AI-generated. Our focus here is to solve a real-world problem by assisting the editor or chair in determining whether a review is written by ChatGPT or not. To address this, we introduce the Term Frequency (TF) model, which posits that AI often repeats tokens, and the Review Regeneration (RR) model, which is based on the idea that ChatGPT generates similar outputs upon re-prompting. We stress-test these detectors against token attacks and paraphrasing. Finally, we propose an effective defensive strategy to reduce the effect of paraphrasing on our models. Our findings suggest that both of our proposed methods perform better than other AI text detectors. Our RR model is more robust, although our TF model performs better than the RR model in the absence of attacks. We make our code, dataset, and model public.

Detecting AI-Generated Peer Reviews

The paper "'Quis custodiet ipsos custodes?' Who will watch the watchmen? On Detecting AI-generated Peer Reviews" tackles the challenge of maintaining the integrity of peer review in the face of increasing AI-generated content. The research addresses the need to distinguish human-authored from AI-generated peer reviews, focusing on reviews that may have been produced by models such as ChatGPT, with the goal of preserving the trustworthiness and rigor of academic publishing.

Core Contributions

The authors introduce two primary models aimed at detecting AI-generated reviews:

  1. Term Frequency (TF) Model: This model exploits the tendency of AI-generated text to repeat tokens, analyzing token-frequency statistics in a review to flag likely AI authorship.
  2. Review Regeneration (RR) Model: The RR approach re-prompts an AI model to regenerate a review and quantifies the similarity between the regenerated output and the review under scrutiny. It rests on the observation that ChatGPT tends to produce similar outputs when re-prompted with the same material. (A sketch of both signals follows this list.)
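To make the two signals concrete, here is a minimal, illustrative sketch; it is not the authors' released code. The Jaccard token overlap below stands in for whatever similarity measure the paper actually uses, and the regenerated review is assumed to come from re-prompting an LLM with the reviewed paper.

```python
# Illustrative sketch of the two detection signals (assumptions noted above).
from collections import Counter
import re

def top_token_mass(review: str, k: int = 10) -> float:
    """TF-style signal: fraction of the review occupied by its k most
    frequent tokens. The TF model's premise is that AI-generated text
    repeats tokens more, so a higher mass hints at AI authorship."""
    tokens = re.findall(r"[a-z']+", review.lower())
    if not tokens:
        return 0.0
    top_k = sum(count for _, count in Counter(tokens).most_common(k))
    return top_k / len(tokens)

def rr_similarity(original: str, regenerated: str) -> float:
    """RR-style signal: similarity between the review under scrutiny and
    a review regenerated by re-prompting the LLM. Jaccard overlap of
    token sets is a toy stand-in for the paper's similarity measure."""
    a = set(re.findall(r"[a-z']+", original.lower()))
    b = set(re.findall(r"[a-z']+", regenerated.lower()))
    return len(a & b) / len(a | b) if (a | b) else 0.0
```

In practice, each score would be thresholded, or fed into a classifier, calibrated on labeled human-written and AI-generated reviews.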

In addition to the two models, the authors propose a defensive strategy against paraphrasing attacks, which are commonly used to evade AI detectors. The strategy modifies tokens in the regenerated reviews to counteract the paraphrasing and preserve the robustness of the detection models; a hedged sketch of the general idea follows.
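This summary does not spell out the exact token-modification procedure, so the following is only an assumed illustration of the idea: canonicalize synonymous tokens in both reviews before computing similarity, so that simple synonym-swap paraphrasing no longer suppresses the match. The `SYNONYMS` table is a toy stand-in for whatever token mapping the authors actually apply.

```python
# Hypothetical paraphrase defense: map tokens to canonical forms before
# comparison. Reuses rr_similarity from the sketch above.
import re

SYNONYMS = {  # toy canonicalization map, illustrative only
    "novel": "new",
    "utilizes": "uses",
    "demonstrates": "shows",
}

def canonicalize(review: str) -> str:
    """Lowercase, tokenize, and replace known synonyms with a canonical
    form so paraphrased variants of the same token still match."""
    tokens = re.findall(r"[a-z']+", review.lower())
    return " ".join(SYNONYMS.get(tok, tok) for tok in tokens)

# Similarity is then computed on canonicalized text:
#     score = rr_similarity(canonicalize(original), canonicalize(regenerated))
```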

Experimental Results

The paper provides empirical evidence that both the TF and RR models outperform existing AI text detectors. In the absence of attacks, the TF model is the stronger of the two, while under token attacks and paraphrasing the RR model degrades less, making it the more robust detector in adversarial settings.

Implications and Future Directions

The implications of this research are manifold. Practically, the models can serve as valuable tools for editors and conference chairs to safeguard the peer-review process. Theoretically, the work sets a foundation for further exploration into AI-generated text detection, potentially influencing guidelines and policies surrounding the use of AI in academic contexts.

Future developments in AI technologies might necessitate continuous adaptation and refinement of these detection techniques. Given the rapid evolution of LLMs, future research could focus on multi-modal approaches that integrate textual analysis with other forms of content representation, aiming to enhance the accuracy and resilience of AI detection systems.

In conclusion, while AI technologies like ChatGPT offer transformative potential in various domains, their application within sensitive contexts such as academic peer review calls for vigilant oversight and sophisticated detection methodologies, as exemplified by this paper. The authors' contributions provide an important step towards ensuring the integrity of scientific discourse in the age of AI.

Authors (5)
  1. Sandeep Kumar (143 papers)
  2. Mohit Sahu (1 paper)
  3. Vardhan Gacche (1 paper)
  4. Tirthankar Ghosal (14 papers)
  5. Asif Ekbal (74 papers)