
Testing of Detection Tools for AI-Generated Text (2306.15666v2)

Published 21 Jun 2023 in cs.CL, cs.AI, and cs.CY

Abstract: Recent advances in generative pre-trained transformer LLMs have emphasised the potential risks of unfair use of AI generated content in an academic environment and intensified efforts in searching for solutions to detect such content. The paper examines the general functionality of detection tools for artificial intelligence generated text and evaluates them based on accuracy and error type analysis. Specifically, the study seeks to answer research questions about whether existing detection tools can reliably differentiate between human-written text and ChatGPT-generated text, and whether machine translation and content obfuscation techniques affect the detection of AI-generated text. The research covers 12 publicly available tools and two commercial systems (Turnitin and PlagiarismCheck) that are widely used in the academic setting. The researchers conclude that the available detection tools are neither accurate nor reliable and have a main bias towards classifying the output as human-written rather than detecting AI-generated text. Furthermore, content obfuscation techniques significantly worsen the performance of tools. The study makes several significant contributions. First, it summarises up-to-date similar scientific and non-scientific efforts in the field. Second, it presents the result of one of the most comprehensive tests conducted so far, based on a rigorous research methodology, an original document set, and a broad coverage of tools. Third, it discusses the implications and drawbacks of using detection tools for AI-generated text in academic settings.

An Analysis of AI-Generated Text Detection Tools' Efficacy

The proliferation of generative pre-trained transformer (GPT) LLMs, such as OpenAI's ChatGPT, has escalated concerns over academic integrity in higher education institutions (HEIs). The paper, "Testing of Detection Tools for AI-Generated Text," systematically evaluates the performance of AI-generated text detection tools with a focus on their practical applicability and limitations within academic settings.

Evaluation Overview

The paper scrutinizes a comprehensive set of 14 detection tools, including both publicly available online applications and commercial systems like Turnitin and PlagiarismCheck, to gauge their ability to distinguish AI-generated content from human-written texts. The evaluated tools are assessed for accuracy, examining both true and false positive/negative rates. This assessment extends to various content modifications such as obfuscation techniques, machine translation, and paraphrasing, common strategies that students may use to obscure AI-generated origins.
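The accuracy and error-type analysis described above boils down to confusion-matrix arithmetic. The sketch below is illustrative only and does not reproduce the paper's data or code: the labels, the toy detector output, and the helper `detection_metrics` are hypothetical, with AI-generated text treated as the positive class so that a false negative means AI text misread as human-written.

```python
def detection_metrics(truth, predicted):
    """Compute accuracy, false-positive rate, and false-negative rate,
    treating "ai" (AI-generated) as the positive class."""
    tp = sum(t == "ai" and p == "ai" for t, p in zip(truth, predicted))
    tn = sum(t == "human" and p == "human" for t, p in zip(truth, predicted))
    fp = sum(t == "human" and p == "ai" for t, p in zip(truth, predicted))
    fn = sum(t == "ai" and p == "human" for t, p in zip(truth, predicted))
    accuracy = (tp + tn) / len(truth)
    fpr = fp / (fp + tn) if (fp + tn) else 0.0  # human text wrongly flagged
    fnr = fn / (fn + tp) if (fn + tp) else 0.0  # AI text missed
    return accuracy, fpr, fnr

# Toy example of the bias the study reports: a detector that defaults
# to "human" scores a high false-negative rate on AI-generated inputs.
truth     = ["ai", "ai", "ai", "ai", "human", "human", "human", "human"]
predicted = ["ai", "human", "human", "human", "human", "human", "human", "ai"]
acc, fpr, fnr = detection_metrics(truth, predicted)
print(acc, fpr, fnr)  # 0.5 0.25 0.75
```

The toy detector is only 50% accurate and misses three of four AI-generated texts, mirroring the study's finding that the dominant error mode is classifying AI output as human-written.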

Critical Findings and Numerical Results

The investigators find several consistent patterns across the tools tested. A high rate of false negatives indicates a tendency to misclassify AI-generated text as human-written, particularly when obfuscation methods are involved. Turnitin emerged as a leading tool with an overall accuracy of 81%, but it remained susceptible to paraphrased or human-edited AI text, a critical vulnerability.

Moreover, manual edits and machine paraphrasing significantly compromised detection accuracy, with tools struggling to identify these modified AI-generated texts. For human-written texts translated from other languages with AI-based tools, detection accuracy dropped by approximately 20%, posing additional challenges for multilingual academic environments.

Implications and Future Directions

The significant discrepancies in performance between detecting pure AI-generated texts versus those altered through common obfuscation techniques present substantial challenges for academia. The findings suggest a pressing need for HEIs to recalibrate their strategies for addressing potential breaches of academic integrity.

These findings carry serious implications. Inaccurate and inconsistent detection, in which false positives could unfairly penalize students and false negatives allow misconduct to go undetected, highlights the ethical and procedural risks of relying on these technologies as conclusive evidence of academic dishonesty.

Future research avenues are recommended to address the dynamic landscape of AI-generated content. Key focus areas include enhancing detection mechanisms for more complex AI text generation scenarios such as hybrid writing and iterative AI usage, and exploration of detection at a system or cohort level, which may offer novel perspectives on AI's impact on education.

In conclusion, while the ambition to utilize AI text detection tools in academia is clear, the current generation of detection technologies requires significant improvement in dealing with subtler forms of AI engagement. Academic institutions must prioritize preventive measures, such as revising assessment methodologies and fostering robust discussions on the ethical deployment of AI in educational practice, as a pragmatic adjunct to attempting to detect AI-generated work.

Authors (8)
  1. Debora Weber-Wulff (3 papers)
  2. Alla Anohina-Naumeca (2 papers)
  3. Sonja Bjelobaba (2 papers)
  4. Tomáš Foltýnek (4 papers)
  5. Jean Guerrero-Dib (2 papers)
  6. Olumide Popoola (1 paper)
  7. Petr Šigut (1 paper)
  8. Lorna Waddington (2 papers)
Citations (127)