Verbalized Sampling: How to Mitigate Mode Collapse and Unlock LLM Diversity (2510.01171v3)

Published 1 Oct 2025 in cs.CL and cs.AI

Abstract: Post-training alignment often reduces LLM diversity, leading to a phenomenon known as mode collapse. Unlike prior work that attributes this effect to algorithmic limitations, we identify a fundamental, pervasive data-level driver: typicality bias in preference data, whereby annotators systematically favor familiar text as a result of well-established findings in cognitive psychology. We formalize this bias theoretically, verify it on preference datasets empirically, and show that it plays a central role in mode collapse. Motivated by this analysis, we introduce Verbalized Sampling (VS), a simple, training-free prompting strategy to circumvent mode collapse. VS prompts the model to verbalize a probability distribution over a set of responses (e.g., "Generate 5 jokes about coffee and their corresponding probabilities"). Comprehensive experiments show that VS significantly improves performance across creative writing (poems, stories, jokes), dialogue simulation, open-ended QA, and synthetic data generation, without sacrificing factual accuracy and safety. For instance, in creative writing, VS increases diversity by 1.6-2.1x over direct prompting. We further observe an emergent trend that more capable models benefit more from VS. In sum, our work provides a new data-centric perspective on mode collapse and a practical inference-time remedy that helps unlock pre-trained generative diversity.

Summary

  • The paper introduces Verbalized Sampling to address mode collapse by mitigating typicality bias inherent in human preference data.
  • The paper demonstrates that using VS increases creative output diversity by 1.6 to 2.1 times compared to direct prompting.
  • Experimental results across tasks confirm that VS enhances generative diversity in LLMs while preserving factual accuracy and safety.

Summary of "Verbalized Sampling: How to Mitigate Mode Collapse and Unlock LLM Diversity"

Introduction

The paper "Verbalized Sampling: How to Mitigate Mode Collapse and Unlock LLM Diversity" addresses a critical issue observed in LLMs after post-training alignment: mode collapse. Mode collapse occurs when models produce a limited set of outputs, constraining their diversity, which is crucial for tasks such as creative writing and dialogue simulation. This phenomenon has traditionally been attributed to algorithmic limitations. However, the authors identify a fundamental data-level cause: typicality bias in preference data, deriving from the human tendency to favor more typical or familiar text due to cognitive factors.

Identifying Mode Collapse and Typicality Bias

The authors provide a comprehensive analysis demonstrating that even with an optimal reward model and learning process, inherent biases within preference datasets can drive mode collapse. They specifically highlight typicality bias, whereby annotators prefer more typical responses, as a pervasive factor. This is explained with an analytical model and confirmed empirically across multiple preference datasets. The analysis shows that the bias acts like an extra reward for text that is already likely under the base model, so alignment sharpens the learned distribution toward its mode, leading models to favor frequent and familiar text (see the sketch below).
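To make the sharpening mechanism concrete, here is one standard way to write it down. The setup below is a generic KL-regularized RLHF objective, intended to be consistent with the paper's argument rather than to reproduce its exact notation. Suppose annotators' perceived reward adds a typicality term to the true reward, $r(y \mid x) = r_{\text{true}}(y \mid x) + \alpha \log \pi_{\text{ref}}(y \mid x)$ with $\alpha > 0$. The optimum of KL-regularized alignment with strength $\beta$ is then

```latex
\pi^{*}(y \mid x)
  \;\propto\; \pi_{\text{ref}}(y \mid x)\,
      \exp\!\left(\frac{r(y \mid x)}{\beta}\right)
  \;=\; \pi_{\text{ref}}(y \mid x)^{\,1 + \alpha/\beta}\,
      \exp\!\left(\frac{r_{\text{true}}(y \mid x)}{\beta}\right).
```

Even when $r_{\text{true}}$ is flat across many equally valid responses, the exponent $1 + \alpha/\beta > 1$ by itself sharpens the base distribution toward its mode, which is precisely mode collapse.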

Verbalized Sampling Methodology

To counter mode collapse, the paper introduces Verbalized Sampling (VS), a prompting method that asks the LLM to verbalize a probability distribution over a set of responses rather than produce a single answer (e.g., "Generate 5 jokes about coffee and their corresponding probabilities"). The intuition is that the most typical completion of such a distribution-level prompt is itself a spread of responses, so the model can sidestep typicality bias at inference time, without any retraining; sampling from the verbalized probabilities then yields diverse outputs. A minimal usage sketch follows.
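The sketch below shows one way to apply VS in practice. The prompt wording follows the paper's example, but the JSON response format, the generate() stub, and both helper functions are illustrative assumptions, not the paper's reference implementation.

```python
import json
import random

def verbalized_sampling_prompt(task: str, k: int = 5) -> str:
    """Build a VS prompt: ask for k responses plus verbalized probabilities."""
    return (
        f"Generate {k} responses to the following task, each with its "
        "corresponding probability (the probabilities should roughly sum to 1). "
        'Return JSON: [{"text": ..., "probability": ...}, ...].\n'
        f"Task: {task}"
    )

def sample_from_verbalized(llm_output: str) -> str:
    """Parse the verbalized distribution and draw one response from it."""
    items = json.loads(llm_output)        # assumes the model returned valid JSON
    texts = [item["text"] for item in items]
    probs = [float(item["probability"]) for item in items]
    total = sum(probs)
    weights = [p / total for p in probs]  # renormalize: verbalized probs rarely sum to exactly 1
    return random.choices(texts, weights=weights, k=1)[0]

# Usage, where generate() is a stand-in for any chat-completion call:
# prompt = verbalized_sampling_prompt("a joke about coffee")
# joke = sample_from_verbalized(generate(prompt))
```

Sampling in proportion to the verbalized probabilities, rather than always taking the most likely item, is what restores diversity; filtering out candidates below a probability threshold is a natural way to trade diversity against typicality.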

Experimental Validation

The authors validate the efficacy of Verbalized Sampling through extensive experiments across diverse tasks: creative writing (poems, stories, jokes), dialogue simulation, open-ended question answering, and synthetic data generation. In creative writing, VS increased diversity by 1.6 to 2.1 times compared to direct prompting, and it recovered much of the models' inherent output diversity without compromising factual accuracy or safety. A sketch of a typical diversity measurement appears below.
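For concreteness, output diversity in evaluations like these is commonly quantified as one minus the mean pairwise similarity of response embeddings. The sketch below follows that generic recipe; the embed() dependency is an assumed stand-in for any sentence-embedding model, and this is not necessarily the paper's exact metric.

```python
import math
from itertools import combinations

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def semantic_diversity(responses, embed):
    """Diversity = 1 - mean pairwise cosine similarity of response embeddings.

    `embed` maps a string to a vector (any sentence-embedding model works);
    it is an assumed dependency, not something the paper specifies here.
    Requires at least two responses.
    """
    vectors = [embed(r) for r in responses]
    sims = [cosine(u, v) for u, v in combinations(vectors, 2)]
    return 1.0 - sum(sims) / len(sims)
```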

The findings also indicate an emergent trend: more capable models benefit more from VS, suggesting that stronger models retain more recoverable diversity from pre-training. This opens up possibilities for real-world applications that require generative diversity, such as social simulation and richer hypothesis generation.

Conclusion

In conclusion, "Verbalized Sampling" offers a pragmatic, training-free solution for mitigating mode collapse and enhancing the generative diversity of LLMs, setting the stage for more creative and varied model applications while retaining accuracy and safety. This research provides a new data-centric lens to analyze and improve aligned models, emphasizing the role of human preference biases in shaping LLM behaviors.
