How Much are Large Language Models Contaminated? A Comprehensive Survey and the LLMSanitize Library (2404.00699v3)

Published 31 Mar 2024 in cs.CL

Abstract: With the rise of LLMs in recent years, abundant new opportunities are emerging, but also new challenges, among which contamination is quickly becoming critical. Business applications and fundraising in AI have reached a scale at which a few percentage points gained on popular question-answering benchmarks could translate into tens of millions of dollars, placing high pressure on model integrity. At the same time, it is becoming harder and harder, if not impossible, to keep track of the data that LLMs have seen, since closed-source models like GPT-4 and Claude-3 divulge no information about their training sets. As a result, contamination becomes a major issue: LLMs' performance may no longer be reliable, as their high scores may be at least partly due to previous exposure to the evaluation data. This limitation jeopardizes progress in the field of NLP, yet there remains a lack of methods to efficiently detect contamination. In this paper, we survey all recent work on contamination detection with LLMs, and help the community track contamination levels of LLMs by releasing an open-source Python library named LLMSanitize that implements the major contamination detection algorithms.
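To make the notion of contamination detection concrete, below is a minimal, hypothetical sketch of one classic data-based check: flagging a benchmark sample as contaminated when it shares a word-level n-gram with the training corpus (in the spirit of the 13-gram overlap analysis popularized by GPT-3's evaluation). This is an illustrative sketch only; the function names (`ngrams`, `build_corpus_index`, `is_contaminated`) and the n-gram size are assumptions, not the LLMSanitize API.

```python
# Illustrative n-gram-overlap contamination check (hypothetical sketch,
# NOT the LLMSanitize API): a benchmark sample is flagged as contaminated
# if any of its word-level n-grams also appears in the training corpus.

from typing import Iterable, Set, Tuple


def ngrams(text: str, n: int) -> Set[Tuple[str, ...]]:
    """Return the set of lowercased word-level n-grams in `text`."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def build_corpus_index(corpus: Iterable[str], n: int = 13) -> Set[Tuple[str, ...]]:
    """Index every n-gram seen anywhere in the training corpus."""
    index: Set[Tuple[str, ...]] = set()
    for doc in corpus:
        index |= ngrams(doc, n)
    return index


def is_contaminated(sample: str, corpus_index: Set[Tuple[str, ...]], n: int = 13) -> bool:
    """Flag the sample if it shares at least one n-gram with the corpus."""
    return bool(ngrams(sample, n) & corpus_index)


if __name__ == "__main__":
    # Toy corpus and sample; n=8 so the overlap is visible at this scale.
    train_docs = ["the quick brown fox jumps over the lazy dog near the river bank today"]
    index = build_corpus_index(train_docs, n=8)
    test_question = "quick brown fox jumps over the lazy dog"
    print(is_contaminated(test_question, index, n=8))  # True: the 8-gram recurs
```

Exact n-gram matching is only the simplest family of detectors; the methods surveyed in the paper also include, among others, normalized overlap ratios and model-based signals such as perplexity and guided prompting, which remain applicable when the training data itself is inaccessible.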

Authors (9)
  1. Mathieu Ravaut (17 papers)
  2. Bosheng Ding (16 papers)
  3. Fangkai Jiao (19 papers)
  4. Hailin Chen (11 papers)
  5. Xingxuan Li (17 papers)
  6. Ruochen Zhao (15 papers)
  7. Chengwei Qin (28 papers)
  8. Caiming Xiong (337 papers)
  9. Shafiq Joty (187 papers)
Citations (3)