
Perplexity Trap: PLM-Based Retrievers Overrate Low Perplexity Documents (2503.08684v1)

Published 11 Mar 2025 in cs.CL, cs.AI, and cs.IR

Abstract: Previous studies have found that PLM-based retrieval models exhibit a preference for LLM-generated content, assigning higher relevance scores to these documents even when their semantic quality is comparable to human-written ones. This phenomenon, known as source bias, threatens the sustainable development of the information access ecosystem. However, the underlying causes of source bias remain unexplored. In this paper, we explain the process of information retrieval with a causal graph and discover that PLM-based retrievers learn perplexity features for relevance estimation, causing source bias by ranking the documents with low perplexity higher. Theoretical analysis further reveals that the phenomenon stems from the positive correlation between the gradients of the loss functions in language modeling task and retrieval task. Based on the analysis, a causal-inspired inference-time debiasing method is proposed, called Causal Diagnosis and Correction (CDC). CDC first diagnoses the bias effect of the perplexity and then separates the bias effect from the overall estimated relevance score. Experimental results across three domains demonstrate the superior debiasing effectiveness of CDC, emphasizing the validity of our proposed explanatory framework. Source codes are available at https://github.com/WhyDwelledOnAi/Perplexity-Trap.

Summary

Perplexity Trap: PLM-Based Retrievers Overrate Low Perplexity Documents

The paper "Perplexity Trap: PLM-Based Retrievers Overrate Low Perplexity Documents" investigates the phenomenon that LLM-generated content is often assigned higher relevance scores by retrieval systems built on pre-trained language models (PLMs), even when its semantic quality is on par with human-written content. This preference for LLM-generated content is referred to as source bias and poses a risk to the integrity of the information access ecosystem. The authors hypothesize that PLM-based retrievers prioritize documents with lower perplexity, a property more commonly found in LLM-generated and LLM-rewritten documents.
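The perplexity feature at the center of this argument is the standard language-model perplexity of a document: the exponential of its average per-token negative log-likelihood. A minimal, self-contained sketch (function and variable names are illustrative, not taken from the paper's code):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the average negative log-likelihood per token.

    token_logprobs: natural-log probabilities a language model assigns
    to each token of a document.
    """
    avg_nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_nll)

# A fluent (LLM-like) document gets higher token probabilities, hence
# lower perplexity, than a less predictable human-written one.
fluent = [math.log(0.5)] * 8    # perplexity = 2.0
choppy = [math.log(0.125)] * 8  # perplexity = 8.0
assert perplexity(fluent) < perplexity(choppy)
```

The claimed bias is that, between two documents of equal semantic relevance, the one with the lower value of this quantity receives the higher retrieval score.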

The research introduces a causal graph-based framework to diagnose and separate the causal effect of document perplexity on estimated relevance, arguing that this separation is crucial to understanding the roots of source bias in these retrievers. The paper attributes the bias to the alignment between the gradients of the loss functions of the language modeling and retrieval tasks. As a remedy, the authors propose the Causal Diagnosis and Correction (CDC) method, which separates the perplexity-induced bias from the overall estimated relevance score. This inference-time debiasing approach is shown to adjust for the bias without compromising ranking quality across domains.
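In spirit, the correction step removes a diagnosed perplexity effect from each raw score. A minimal sketch of that idea, assuming a linear bias effect with an already-diagnosed slope (names, numbers, and the functional form are illustrative; the paper's exact formulation may differ):

```python
def cdc_correct(scores, perplexities, slope):
    """Remove the diagnosed linear perplexity effect from each raw
    relevance score, leaving the (approximate) semantic component."""
    return [s - slope * p for s, p in zip(scores, perplexities)]

# Illustrative numbers: an LLM-rewritten doc (low perplexity) narrowly
# outscores a human-written one (high perplexity) before correction.
raw = [0.90, 0.88]       # [llm_doc, human_doc]
ppl = [5.0, 20.0]
diagnosed_slope = -0.02  # negative: low perplexity inflates the score
corrected = cdc_correct(raw, ppl, diagnosed_slope)
# After correction, the human-written document ranks first.
```

Because the correction only shifts scores by a term in perplexity, it can be applied at inference time without retraining the retriever, which matches the paper's framing of CDC as an inference-time method.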

Empirically, the research demonstrates a causal effect of perplexity across several PLM-based retrievers on datasets from distinct domains, using a two-stage regression procedure. The estimated bias is statistically significant, though relatively modest, indicating a systematic tilt toward content with lower perplexity. These findings support the hypothesis that perplexity, a feature unrelated to semantic matching, is an unintended factor inflating retrieval relevance scores.
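At its core, the diagnosis amounts to regressing relevance scores on perplexity and reading off the slope. A simplified single-stage illustration with synthetic data (the paper itself uses a two-stage procedure; names and numbers here are invented):

```python
def ols_slope(x, y):
    """Ordinary least-squares slope of y on x: cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Synthetic scores that decrease systematically with perplexity:
ppl = [2.0, 4.0, 6.0, 8.0]
scores = [0.9, 0.8, 0.7, 0.6]
slope = ols_slope(ppl, scores)  # negative: lower perplexity, higher score
```

A significantly negative slope on real retriever outputs is the signature of the bias; its small magnitude is consistent with the paper's finding that the effect is systematic but modest.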

From a theoretical perspective, the paper analyzes the interplay between the retrieval and language modeling tasks, showing how the linear relationship between their gradient structures produces the biased retrieval behavior. A consequence is that improving a retriever's language modeling capacity can inadvertently heighten its preference for low-perplexity documents, creating a trade-off between retrieval efficacy and the exacerbation of source bias.
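Schematically, the claimed mechanism can be written as a positive alignment between the two tasks' parameter gradients (the notation below is illustrative, not the paper's exact formulation):

```latex
\left\langle \nabla_\theta \mathcal{L}_{\mathrm{ret}},\;
             \nabla_\theta \mathcal{L}_{\mathrm{LM}} \right\rangle > 0
\;\Longrightarrow\;
\text{updates that reduce } \mathcal{L}_{\mathrm{ret}}
\text{ also tend to reduce } \mathcal{L}_{\mathrm{LM}}.
```

Under this reading, retrieval training implicitly continues language-model training, so the retriever ends up rewarding exactly the documents the language model finds most predictable, i.e., those with low perplexity.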

Practically, these findings suggest a predictable compromise for retrieval systems built on PLM architectures: enhanced language modeling abilities that improve ranking performance may simultaneously intensify source bias. The proposed CDC approach marks a notable step forward, offering a promising way to mitigate this bias while preserving, or even improving, ranking performance.

Looking forward, this research opens avenues for further investigation into other non-causal features that might influence retrieval biases and how similar causal frameworks can be employed to mitigate them. The study also raises questions about the extent to which retrieval systems should debias towards human-written content, balancing information quality and content diversity. As LLMs continue to proliferate, addressing source bias presents both a critical challenge and an opportunity for optimizing information retrieval systems.
