Large Language Model Hacking: Quantifying the Hidden Risks of Using LLMs for Text Annotation (2509.08825v1)

Published 10 Sep 2025 in cs.CL, cs.AI, and cs.LG

Abstract: LLMs are rapidly transforming social science research by enabling the automation of labor-intensive tasks like data annotation and text analysis. However, LLM outputs vary significantly depending on the implementation choices made by researchers (e.g., model selection, prompting strategy, or temperature settings). Such variation can introduce systematic biases and random errors, which propagate to downstream analyses and cause Type I, Type II, Type S, or Type M errors. We call this LLM hacking. We quantify the risk of LLM hacking by replicating 37 data annotation tasks from 21 published social science research studies with 18 different models. Analyzing 13 million LLM labels, we test 2,361 realistic hypotheses to measure how plausible researcher choices affect statistical conclusions. We find incorrect conclusions based on LLM-annotated data in approximately one in three hypotheses for state-of-the-art models, and in half the hypotheses for small LLMs. While our findings show that higher task performance and better general model capabilities reduce LLM hacking risk, even highly accurate models do not completely eliminate it. The risk of LLM hacking decreases as effect sizes increase, indicating the need for more rigorous verification of findings near significance thresholds. Our extensive analysis of LLM hacking mitigation techniques emphasizes the importance of human annotations in reducing false positive findings and improving model selection. Surprisingly, common regression estimator correction techniques are largely ineffective in reducing LLM hacking risk, as they heavily trade off Type I vs. Type II errors. Beyond accidental errors, we find that intentional LLM hacking is unacceptably simple. With few LLMs and just a handful of prompt paraphrases, anything can be presented as statistically significant.

Summary

  • The paper demonstrates that replicating 37 annotation tasks from 21 published studies across 18 LLMs yields incorrect statistical conclusions in roughly 31% of hypotheses for state-of-the-art models and up to 50% for small models.
  • The paper finds that Type II errors predominate and that, even when significant effects are correctly detected, effect-size estimates deviate 40-77% from true values on average, signaling major reliability concerns.
  • The paper recommends human-in-the-loop validation and enhanced transparency to mitigate deliberate manipulation and hidden biases in LLM outputs.

LLM Hacking: Quantifying the Hidden Risks of Using LLMs for Text Annotation

Introduction

The paper "LLM Hacking: Quantifying the Hidden Risks of Using LLMs for Text Annotation" addresses the significant variability in outputs from LLMs due to different implementation choices. This variability can lead to systematic biases and errors in downstream analyses, termed as LLM hacking. The paper replicates 37 data annotation tasks using 18 different models to quantify the risk of LLM hacking, revealing that incorrect conclusions occur in approximately one-third of hypotheses for state-of-the-art models. Figure 1

Figure 1: We quantify LLM hacking risk through systematic replication of 37 diverse computational social science annotation tasks.
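
To make "plausible researcher choices" concrete, the following minimal Python sketch enumerates a hypothetical grid of models, prompt paraphrases, and temperature settings; the model names and prompts are illustrative placeholders, not the paper's actual configurations.

```python
from itertools import product

# Hypothetical configuration grid illustrating the researcher degrees of
# freedom the paper varies: model choice, prompt phrasing, and temperature.
# All names and prompts below are placeholders, not the paper's settings.
models = ["gpt-4o", "llama-3-8b-instruct", "mistral-7b-instruct"]
prompts = [
    "Label the stance of this text as SUPPORT or OPPOSE.",
    "Does the author support or oppose the policy? Answer SUPPORT or OPPOSE.",
    "Classify the following statement as supportive or opposing.",
]
temperatures = [0.0, 0.7]

configs = list(product(models, prompts, temperatures))
print(f"{len(configs)} plausible annotation configurations for a single task")
# Each configuration can yield a different labeling of the same corpus, and
# therefore a potentially different downstream statistical conclusion.
```

Multiplied across 37 tasks and 2,361 hypotheses, even a small grid like this generates a large space of analyses from which conclusions can diverge.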

Empirical LLM Hacking Risk

The paper finds that even state-of-the-art LLMs produce incorrect scientific conclusions in a substantial fraction of cases. The empirical LLM hacking risk ranges from 31% for the best-performing models to 50% for smaller models. Type II errors dominate, with models more frequently missing true effects than fabricating false ones. Even when models correctly identify significant effects, the estimated effect sizes deviate from true values by 40-77% on average (Figure 2).

Figure 2: Scaling relationships for LLM hacking risk and annotation performance.
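
When a human-annotated ground truth is available, the four error types can be operationalized by running the same downstream test on LLM labels and on human labels and comparing significance, sign, and magnitude. The sketch below uses a simple slope test and a 50% magnitude threshold as illustrative assumptions; the paper's tasks rely on a variety of estimators.

```python
import numpy as np
from scipy import stats

def classify_conclusion(x, y_llm, y_human, alpha=0.05):
    """Compare a simple slope test run on LLM-annotated outcomes against the
    same test run on human-annotated outcomes (illustrative sketch only)."""
    slope_llm, _, _, p_llm, _ = stats.linregress(x, y_llm)
    slope_true, _, _, p_true, _ = stats.linregress(x, y_human)

    sig_llm, sig_true = p_llm < alpha, p_true < alpha
    if sig_llm and not sig_true:
        return "Type I"   # false positive: effect reported where none exists
    if not sig_llm and sig_true:
        return "Type II"  # missed effect: true effect not detected
    if sig_llm and sig_true and np.sign(slope_llm) != np.sign(slope_true):
        return "Type S"   # sign error: significant effect with the wrong sign
    if sig_llm and sig_true and abs(slope_llm - slope_true) > 0.5 * abs(slope_true):
        return "Type M"   # magnitude error (50% threshold is illustrative)
    return "correct"
```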

Intentional LLM Hacking

The potential for deliberate manipulation is substantial: model selection and prompt formulation alone can make almost any hypothesis appear statistically significant. With only a few LLMs and a handful of prompt paraphrases, fabricating both Type I and Type II errors is alarmingly feasible, even among top-performing models. This vulnerability suggests that LLMs should not be used as black-box annotators without rigorous validation (Figure 3).

Figure 3: Average feasibility rates of LLM hacking and correct conclusions across annotation tasks.
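
A toy simulation shows how potent this selection effect is. Below, the covariate has no true relationship with the annotated construct, yet scanning labelings from many hypothetical configurations and cherry-picking the smallest p-value will typically surface a "significant" result; the data, noise model, and configuration count are invented purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy setting: covariate x has no true relationship with the construct being
# annotated, so any "significant" slope below is a false positive.
n = 300
x = rng.normal(size=n)

# Hypothetical labelings from 100 model/prompt configurations: a shared null
# signal plus configuration-specific noise, mimicking annotation variability.
base = rng.normal(size=n)
labelings = {f"config_{i:03d}": base + rng.normal(0.0, 1.0, size=n)
             for i in range(100)}

pvals = {name: stats.linregress(x, y).pvalue for name, y in labelings.items()}
cherry_picked = min(pvals, key=pvals.get)
n_significant = sum(p < 0.05 for p in pvals.values())

print(f"{n_significant}/100 configurations reach p < 0.05 despite no true effect")
print(f"cherry-picked: {cherry_picked}, p = {pvals[cherry_picked]:.4f}")
```

Reporting only the cherry-picked configuration, without disclosing the search, is exactly the intentional LLM hacking the paper warns against.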

Predictors of LLM Hacking

The paper identifies key predictors of LLM hacking risk, with proximity to statistical significance thresholds being the strongest predictor. Task characteristics account for a substantial share of the explained variance, while model performance contributes less. Surprisingly, prompt engineering choices have minimal impact on reducing LLM hacking risk (Figure 4).

Figure 4: LLM hacking risk versus model performance by task.

Mitigating LLM Hacking Risk

Access to human annotations enables multiple mitigation strategies, though each involves trade-offs. Using human annotations alone provides the strongest protection against false positives. Statistical correction techniques can reduce Type I errors but often increase Type II errors. The paper emphasizes the importance of human-in-the-loop designs and transparency in LLM-assisted research (Figure 5).

Figure 5: Trade-off between Type I errors and combined Type II+S errors across mitigation strategies.
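
One pattern consistent with the paper's emphasis on human annotations for model selection is to spend a small human-labeled subset on choosing the annotation configuration before running a single pre-specified test. The sketch below is a hedged illustration; the agreement criterion, function names, and use of a slope test are assumptions rather than the paper's prescribed procedure.

```python
import numpy as np
from scipy import stats

def validated_analysis(x, llm_labelings, human_idx, human_labels, alpha=0.05):
    """Human-in-the-loop sketch: score each candidate configuration against a
    small human-annotated subset, keep the most accurate one, and run the
    pre-specified downstream test once on its labels (illustrative only)."""
    # 1. Configuration selection using only the human-annotated subset.
    def accuracy(labels):
        return float(np.mean(labels[human_idx] == human_labels))
    best = max(llm_labelings, key=lambda name: accuracy(llm_labelings[name]))

    # 2. A single, pre-registered test on the selected configuration's labels.
    result = stats.linregress(x, llm_labelings[best])
    return best, result.slope, result.pvalue < alpha
```

Because the downstream test is fixed in advance and configurations are compared only on held-out human labels, the selection step that drives intentional LLM hacking never sees the p-values, at the cost of some annotation budget.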

Conclusion

The integration of LLMs into scientific research workflows requires a fundamental shift in practices. Researchers must treat LLMs as complex instruments requiring careful calibration and validation. The paper provides practical recommendations to limit LLM hacking, advocating for transparency standards and pre-registration of LLM configuration choices. While LLMs offer unprecedented scalability, their use in hypothesis testing demands rigorous safeguards to maintain scientific integrity.
