
Aligning Model Evaluations with Human Preferences: Mitigating Token Count Bias in Language Model Assessments (2407.12847v1)

Published 5 Jul 2024 in cs.CL, cs.AI, and cs.HC

Abstract: The SLAM paper demonstrated that on-device Small Language Models (SLMs) are a viable and cost-effective alternative to API-based LLMs, such as OpenAI's GPT-4, offering comparable performance and stability. However, SLAM also identified discrepancies between human preferences and traditional auto-evaluators. This follow-up paper explores methods to align LLM evaluator preferences with human evaluations by addressing biases, particularly a bias toward higher token counts. We employed Bayesian statistics and a t-test to quantify this bias and developed a recalibration procedure to adjust the GPTScorer. Our findings show significantly improved alignment between the recalibrated LLM evaluator and human evaluations across multiple use cases. For instance, the Spearman's rank correlation score in the Recommendation use case improved from -27.27 to 44.55. These results highlight the importance of accounting for biases in automated evaluations to ensure fair and accurate model assessments. The recalibration process enhances the reliability of automated evaluators, leading to better AI models that align with human values and expectations. This study provides a robust methodology for future research into bias correction and emphasizes the feasibility and benefits of developing human-aligned AI evaluation systems.
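The abstract reports evaluator/human agreement as a Spearman's rank correlation score. As a minimal sketch of how such agreement can be measured, the snippet below computes Spearman's rho from scratch (rho is the Pearson correlation of the two rank vectors) on hypothetical score lists; the data and the shape of the recalibration are illustrative assumptions, not the paper's actual procedure or results.

```python
import statistics

def rank(values):
    # Ordinal ranks (1-based). Assumes distinct values; a full
    # implementation would assign average ranks to ties.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = float(r)
    return ranks

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the rank vectors.
    rx, ry = rank(x), rank(y)
    mx, my = statistics.mean(rx), statistics.mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: human preference scores for five model outputs,
# a raw evaluator that disagrees with humans (e.g. due to token-count
# bias), and a recalibrated evaluator that tracks them more closely.
human = [3, 1, 4, 2, 5]
raw_eval = [5, 4, 1, 3, 2]
recalibrated = [3, 2, 4, 1, 5]

print(spearman(human, raw_eval))       # negative: evaluator disagrees
print(spearman(human, recalibrated))   # positive: closer to human ranking
```

Multiplying rho by 100 gives scores on the same -100..100 scale as the -27.27 to 44.55 improvement quoted in the abstract.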

Authors (2)
  1. Roland Daynauth (6 papers)
  2. Jason Mars (21 papers)