Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense (2303.13408v2)

Published 23 Mar 2023 in cs.CL, cs.CR, and cs.LG

Abstract: The rise in malicious usage of LLMs, such as fake content creation and academic plagiarism, has motivated the development of approaches that identify AI-generated text, including those based on watermarking or outlier detection. However, the robustness of these detection algorithms to paraphrases of AI-generated text remains unclear. To stress test these detectors, we build an 11B parameter paraphrase generation model (DIPPER) that can paraphrase paragraphs, condition on surrounding context, and control lexical diversity and content reordering. Using DIPPER to paraphrase text generated by three LLMs (including GPT3.5-davinci-003) successfully evades several detectors, including watermarking, GPTZero, DetectGPT, and OpenAI's text classifier. For example, DIPPER drops the detection accuracy of DetectGPT from 70.3% to 4.6% (at a constant false positive rate of 1%), without appreciably modifying the input semantics. To increase the robustness of AI-generated text detection to paraphrase attacks, we introduce a simple defense that relies on retrieving semantically similar generations and must be maintained by an LLM API provider. Given a candidate text, our algorithm searches a database of sequences previously generated by the API, looking for sequences that match the candidate text within a certain threshold. We empirically verify our defense using a database of 15M generations from a fine-tuned T5-XXL model and find that it can detect 80% to 97% of paraphrased generations across different settings while only classifying 1% of human-written sequences as AI-generated. We open-source our models, code and data.

Paraphrasing Evades Detectors of AI-Generated Text

The paper "Paraphrasing Evades Detectors of AI-Generated Text, but Retrieval Is an Effective Defense" examines how easily current AI-text detection systems can be bypassed through paraphrasing. With the advent of LLMs like GPT-3.5 that can generate coherent long-form content, robust detection mechanisms are needed to counter malicious uses such as fake news creation and academic plagiarism. This research investigates how well existing detectors hold up against text that has been paraphrased to preserve its semantic meaning while altering wording and syntax.

Main Contributions

  1. Development of a Paraphrase Generation Model: The paper introduces an 11-billion-parameter model named DIPPER that generates diverse paraphrases while preserving the original semantic intent. DIPPER is fine-tuned to paraphrase paragraph-level text while conditioning on surrounding context, with controllable lexical diversity and content reordering (a minimal sketch of this kind of controlled paraphrasing follows this list).
  2. Effectiveness Against Current Detectors: The research demonstrates the vulnerability of existing AI text detectors to paraphrased text. Using DIPPER to paraphrase outputs from models such as GPT-3.5 drops the detection accuracy of tools like DetectGPT dramatically (e.g., from 70.3% to 4.6% at a constant 1% false positive rate).
  3. Proposed Defense Mechanism: To improve robustness against paraphrase attacks, a retrieval-based defense is proposed. The API provider compares a candidate text against a database of its prior generations and flags texts that are semantically similar to a stored generation. The method detects up to 97% of paraphrased content, making it a promising strategy for improving detection resilience.
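The controllable paraphrasing described in contribution 1 can be approximated through a standard seq2seq interface. The Python sketch below assumes a Hugging Face-style T5 checkpoint and a control-code prompt format ("lexical = ..., order = ..."); the checkpoint name and the exact prompt format are illustrative assumptions, not the authors' released interface.

```python
# Sketch of DIPPER-style controlled paraphrasing with a T5-family seq2seq model.
# The checkpoint id and control-code format below are placeholders for illustration.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "dipper-paraphraser-xxl"  # hypothetical checkpoint id

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def paraphrase(text: str, context: str = "", lex_diversity: int = 60,
               order_diversity: int = 60) -> str:
    """Paraphrase `text`, optionally conditioning on preceding `context`.

    Higher lexical/order diversity values request more aggressive word
    substitution and content reordering (assumed 0-100 scale).
    """
    prompt = (f"lexical = {lex_diversity}, order = {order_diversity} "
              f"{context} <sent> {text} </sent>")
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    outputs = model.generate(**inputs, do_sample=True, top_p=0.75,
                             max_new_tokens=512)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(paraphrase("The quick brown fox jumps over the lazy dog."))
```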

Experimental Results

  • Paraphrase Evasion: The results show a drastic reduction in detection effectiveness across several detectors after paraphrasing. For example, on the open-ended generation task, DIPPER reduced DetectGPT's detection rate from 70.3% to 4.6%.
  • Retrieval-Based Detection: Tested against a database of 15M generations, retrieval can outperform traditional detection methods, achieving detection accuracy of 80.4% to 97.3% on paraphrased content and remaining robust across paraphrasing intensities (a minimal sketch of this retrieval step follows this list).
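A minimal sketch of the retrieval step, assuming an off-the-shelf sentence encoder (sentence-transformers' all-MiniLM-L6-v2) as a stand-in for the retriever used in the paper, and a hand-picked similarity threshold; in practice the threshold would be calibrated so that only about 1% of human-written texts are flagged.

```python
# Retrieval defense sketch: the API provider stores every sequence it generates,
# embeds it, and flags a candidate text as AI-generated if its nearest stored
# generation exceeds a similarity threshold.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in encoder

# Database of previously generated sequences (millions of rows in practice).
generated_corpus = [
    "The committee approved the budget after a lengthy debate.",
    "Solar panels convert sunlight directly into electricity.",
]
db_embeddings = encoder.encode(generated_corpus, normalize_embeddings=True)

def is_ai_generated(candidate: str, threshold: float = 0.75) -> bool:
    """Return True if the candidate is close to any stored generation.

    `threshold` is a placeholder; it should be calibrated so that only
    ~1% of human-written texts are flagged (the paper's false positive rate).
    """
    query = encoder.encode([candidate], normalize_embeddings=True)
    similarities = db_embeddings @ query[0]  # cosine similarity (unit vectors)
    return bool(np.max(similarities) >= threshold)

# A paraphrase of a stored generation should still retrieve its source.
print(is_ai_generated("After long discussion, the committee passed the budget."))
```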

Implications and Future Directions

The paper highlights significant vulnerabilities in AI-generated text detection systems. It suggests that future efforts should focus on multi-faceted detection frameworks that combine watermarking, statistical outlier detection, and retrieval. Maintaining databases of previously generated content might become common practice for LLM API providers as a way to verify text provenance, akin to search engines indexing web pages.

The paper also acknowledges ethical concerns around privacy, since the defense requires storing user-facing generations; any deployment of retrieval-based systems should therefore include appropriate privacy-preserving mechanisms.

In conclusion, this research demonstrates the potency of paraphrasing attacks against current AI text detectors while introducing a robust retrieval-based detection strategy. By both exposing weaknesses and proposing a concrete defense, the paper offers a valuable blueprint for advancing AI-generated text detection technologies.

Authors (5)
  1. Kalpesh Krishna (30 papers)
  2. Yixiao Song (11 papers)
  3. Marzena Karpinska (19 papers)
  4. John Wieting (40 papers)
  5. Mohit Iyyer (87 papers)
Citations (227)