The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism (2407.10457v1)

Published 15 Jul 2024 in cs.CL and cs.AI

Abstract: Current evaluations of LLMs often overlook non-determinism, typically focusing on a single output per example. This limits our understanding of LLM performance variability in real-world applications. Our study addresses this issue by exploring key questions about the performance differences between greedy decoding and sampling, identifying benchmarks' consistency regarding non-determinism, and examining unique model behaviors. Through extensive experiments, we observe that greedy decoding generally outperforms sampling methods for most evaluated tasks. We also observe consistent performance across different LLM sizes and alignment methods, noting that alignment can reduce sampling variance. Moreover, our best-of-N sampling approach demonstrates that smaller LLMs can match or surpass larger models such as GPT-4-Turbo, highlighting the untapped potential of smaller LLMs. This research shows the importance of considering non-determinism in LLM evaluations and provides insights for future LLM development and evaluation.
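To make the decoding strategies discussed in the abstract concrete, here is a minimal sketch (not taken from the paper) contrasting greedy decoding, temperature sampling, and a simple best-of-N selection with the Hugging Face `transformers` generate API. The model name, sampling parameters, and `score_fn` are illustrative assumptions, not the paper's actual experimental setup.

```python
# Sketch: greedy decoding vs. sampling vs. best-of-N selection.
# Assumes `transformers` and `torch` are installed; model choice is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper evaluates much larger LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Q: What is 17 * 24?\nA:"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding: deterministic, takes the argmax token at every step.
greedy_ids = model.generate(**inputs, do_sample=False, max_new_tokens=32)
greedy_text = tokenizer.decode(greedy_ids[0], skip_special_tokens=True)

# Sampling: non-deterministic; repeated calls can return different outputs.
sample_ids = model.generate(
    **inputs, do_sample=True, temperature=0.7, top_p=0.95, max_new_tokens=32
)
sample_text = tokenizer.decode(sample_ids[0], skip_special_tokens=True)

# Best-of-N sampling: draw N candidates, keep the one a scorer prefers.
# `score_fn` is a hypothetical stand-in for a reward model or task metric.
def best_of_n(n, score_fn):
    candidates = []
    for _ in range(n):
        ids = model.generate(
            **inputs, do_sample=True, temperature=0.7, top_p=0.95, max_new_tokens=32
        )
        candidates.append(tokenizer.decode(ids[0], skip_special_tokens=True))
    return max(candidates, key=score_fn)

# Toy scorer (shorter answers win) just to keep the sketch self-contained.
best_text = best_of_n(4, score_fn=lambda text: -len(text))
print(greedy_text, sample_text, best_text, sep="\n---\n")
```

In the paper's framing, the interesting comparison is between the single greedy output and the distribution of sampled outputs; best-of-N simply picks the strongest candidate from that distribution, which is how smaller models can close the gap to larger ones.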

Authors (4)
  1. Yifan Song (49 papers)
  2. Guoyin Wang (108 papers)
  3. Sujian Li (83 papers)
  4. Bill Yuchen Lin (72 papers)
Citations (15)
