Evaluating LLM-corrupted Crowdsourcing Data Without Ground Truth (2506.06991v1)

Published 8 Jun 2025 in cs.AI, cs.GT, and cs.HC

Abstract: The recent success of generative AI highlights the crucial role of high-quality human feedback in building trustworthy AI systems. However, the increasing use of LLMs by crowdsourcing workers poses a significant challenge: datasets intended to reflect human input may be compromised by LLM-generated responses. Existing LLM detection approaches often rely on high-dimension training data such as text, making them unsuitable for annotation tasks like multiple-choice labeling. In this work, we investigate the potential of peer prediction -- a mechanism that evaluates the information within workers' responses without using ground truth -- to mitigate LLM-assisted cheating in crowdsourcing with a focus on annotation tasks. Our approach quantifies the correlations between worker answers while conditioning on (a subset of) LLM-generated labels available to the requester. Building on prior research, we propose a training-free scoring mechanism with theoretical guarantees under a crowdsourcing model that accounts for LLM collusion. We establish conditions under which our method is effective and empirically demonstrate its robustness in detecting low-effort cheating on real-world crowdsourcing datasets.

Authors (4)
  1. Yichi Zhang (184 papers)
  2. Jinlong Pang (8 papers)
  3. Zhaowei Zhu (29 papers)
  4. Yang Liu (2253 papers)

Summary

Evaluating LLM-Corrupted Crowdsourcing Data Without Ground Truth: A Formal Analysis

The paper "Evaluating LLM-corrupted Crowdsourcing Data Without Ground Truth" addresses the complexity that emerges with the increasing use of LLMs in crowdsourced annotation tasks. The conventional practice of dependent data verification through ground truth is challenged by LLMs, which may generate labels that skew the true distribution of human inputs expected in crowdsourced datasets.

Overview

The authors identify a critical challenge in maintaining the quality of data used to build generative AI systems when crowdsourced workers may leverage LLMs to complete tasks. Existing LLM detection primarily relies on evaluating textual content and struggles with the discrete responses common in multiple-choice labeling. To address this, the researchers explore peer prediction: a method for evaluating the information in worker responses without access to ground truth. Their approach develops a mechanism to mitigate LLM-assisted cheating in crowdsourcing, aimed squarely at annotation tasks where the output is discrete.

Methodology

The core proposal is a training-free scoring mechanism with theoretical guarantees under a crowdsourcing model that accounts for LLM collusion. The method scores workers by measuring the correlation between their submissions while conditioning on (a subset of) LLM-generated labels available to the requester. The mechanism extends prior peer prediction research and is designed to remain valid without access to ground truth.
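As a rough illustration of this idea, the sketch below estimates a conditional-mutual-information-style correlation between two workers' answers while conditioning on the LLM label for each task. It is a minimal approximation of conditioned peer prediction under assumed empirical-frequency estimates, not the paper's exact scoring rule; the function name `conditioned_peer_score` and the smoothing constant are illustrative choices.

```python
import numpy as np

def conditioned_peer_score(labels_i, labels_j, llm_labels, num_classes):
    """Correlation between two workers' labels, conditioned on the LLM label.

    labels_i, labels_j, llm_labels: integer arrays of shape (num_tasks,) with
    values in {0, ..., num_classes - 1}. Higher scores suggest the two workers
    share information beyond what the LLM label already explains.
    """
    labels_i = np.asarray(labels_i)
    labels_j = np.asarray(labels_j)
    llm_labels = np.asarray(llm_labels)

    score, n_total = 0.0, len(llm_labels)
    for z in range(num_classes):
        mask = llm_labels == z
        n_z = int(mask.sum())
        if n_z < 2:
            continue  # too few tasks in this stratum to estimate a joint distribution
        # Empirical joint distribution of the two workers' answers, lightly smoothed.
        joint = np.full((num_classes, num_classes), 1e-6)
        for a, b in zip(labels_i[mask], labels_j[mask]):
            joint[a, b] += 1.0
        joint /= joint.sum()
        p_i = joint.sum(axis=1, keepdims=True)  # worker i's marginal within the stratum
        p_j = joint.sum(axis=0, keepdims=True)  # worker j's marginal within the stratum
        # Mutual information between the two workers given LLM label z,
        # weighted by the fraction of tasks falling into this stratum.
        score += (n_z / n_total) * float(np.sum(joint * np.log(joint / (p_i * p_j))))
    return score
```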

The authors analyze two regimes to certify their method: general forms of cheating, and naive strategies in which workers mix high-effort signals with low-effort ones such as LLM-generated labels. Under conditions on the dependence structure of worker and LLM signals, the conditioned peer prediction mechanism guarantees that high-effort signals (genuine human responses) earn higher expected scores than low-effort, LLM-assisted responses, as illustrated by the simulation below.
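The toy simulation below, which builds on the sketch above with an assumed noise model rather than the paper's experimental setup, illustrates the intended separation: two honest workers who independently label noisy copies of a latent ground truth retain correlation even after conditioning on the LLM label, while a worker who mostly copies the LLM label should score lower.

```python
rng = np.random.default_rng(0)
num_tasks, num_classes = 2000, 3
truth = rng.integers(num_classes, size=num_tasks)  # latent ground truth, never used for scoring

def noisy_copy(source, flip_prob):
    """Return `source` with each entry replaced by a uniformly random label with probability flip_prob."""
    flips = rng.random(num_tasks) < flip_prob
    return np.where(flips, rng.integers(num_classes, size=num_tasks), source)

llm = noisy_copy(truth, 0.30)        # LLM labels available to the requester
honest_a = noisy_copy(truth, 0.20)   # high-effort human workers
honest_b = noisy_copy(truth, 0.25)
own = noisy_copy(truth, 0.25)        # the cheater's own (mostly unused) high-effort signal
cheater = np.where(rng.random(num_tasks) < 0.8, llm, own)  # copies the LLM label 80% of the time

print("honest pair      :", conditioned_peer_score(honest_a, honest_b, llm, num_classes))
print("honest vs cheater:", conditioned_peer_score(honest_b, cheater, llm, num_classes))
```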

Strong Claims and Empirical Evidence

The paper claims that the conditioned scoring mechanism can detect low-effort, naive cheating strategies in crowdsourced datasets. The authors back this with empirical studies on real-world datasets, evaluating the mechanism on subjective labeling tasks (e.g., toxicity and preference labeling) with labels sampled from five different LLMs. Their findings suggest that, after conditioning on the LLM-generated labels, genuine human responses exhibit stronger correlations with their peers than LLM-assisted responses do.

Implications and Theoretical Insights

The paper has implications for the reliability and efficacy of crowdsourcing in AI development. By sidestepping reliance on ground truth for validation, the approach aligns better with the realities of modern data collection and offers a scalable option for deployment across varied crowdsourcing contexts. This may help mitigate over-reliance on sparse or skewed datasets in AI training.

The authors argue that even though machine-generated responses can sometimes surpass human input in coherence or accuracy, widespread use of LLMs undermines the foundational aim of collecting diverse, authentic human feedback. Practically, preventing such skews during dataset preparation supports broader model generalization, a cornerstone of robust AI systems.

Future Developments

Future work guided by these findings could refine the methods for greater nuance, especially as LLMs continue to evolve and more closely resemble human decision patterns. Integrating more adaptive algorithms could further improve detection in mixed human-AI annotation environments.

Conclusion

In essence, the paper introduces a theoretically and empirically grounded method for handling LLM involvement in crowdsourcing, reinforcing data integrity without requiring ground truth. This application of conditioned peer prediction holds promise for evolving crowdsourcing practices to counter the challenges posed by AI itself, supporting more consistent and reliable AI development.
