LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods (2412.05579v2)

Published 7 Dec 2024 in cs.CL and cs.IR

Abstract: The rapid advancement of LLMs has driven their expanding application across various fields. One of the most promising applications is their role as evaluators based on natural language responses, referred to as "LLMs-as-judges". This framework has attracted growing attention from both academia and industry due to their excellent effectiveness, ability to generalize across tasks, and interpretability in the form of natural language. This paper presents a comprehensive survey of the LLMs-as-judges paradigm from five key perspectives: Functionality, Methodology, Applications, Meta-evaluation, and Limitations. We begin by providing a systematic definition of LLMs-as-Judges and introduce their functionality (Why use LLM judges?). Then we address methodology to construct an evaluation system with LLMs (How to use LLM judges?). Additionally, we investigate the potential domains for their application (Where to use LLM judges?) and discuss methods for evaluating them in various contexts (How to evaluate LLM judges?). Finally, we provide a detailed analysis of the limitations of LLM judges and discuss potential future directions. Through a structured and comprehensive analysis, we aim to provide insights on the development and application of LLMs-as-judges in both research and practice. We will continue to maintain the relevant resource list at https://github.com/CSHaitao/Awesome-LLMs-as-Judges.

LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods

The paper "LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods" authored by Haitao Li et al. presents a detailed examination of the burgeoning paradigm where LLMs serve as evaluators, termed as "LLMs-as-Judges." This research addresses the increasing utilization of LLMs as evaluative tools across various domains due to their substantial capacity for understanding, generating natural language, and processing complex data inputs. The paper systematically surveys the field by analyzing LLMs-as-Judges from five critical perspectives: Functionality, Methodology, Applications, Meta-evaluation, and Limitations.

Key Dimensions of the Survey:

  1. Functionality: The paper dissects the role of LLMs in diverse evaluative tasks. The authors identify three primary applications: Performance Evaluation, Model Enhancement, and Data Construction. LLMs assess response quality, perform holistic model evaluations, and serve as enhancement signals during training and inference. Moreover, they aid in producing high-quality datasets by automating annotation and synthesis.
  2. Methodology: The survey categorizes evaluation methodologies into three primary frameworks: Single-LLM systems, Multi-LLM systems, and Human-AI Collaboration systems. These cover prompt engineering, model tuning, and post-processing techniques that improve the efficacy and reliability of LLM judgments (a minimal single-LLM judge is sketched after this list). For multi-LLM frameworks, the paper discusses cooperative and competitive communication structures for deriving robust evaluation outcomes.
  3. Applications: LLMs-as-Judges are applied broadly, spanning from general text tasks such as dialogue generation, translation, and summarization, to more specific domains including legal, medical, financial, and educational fields. The ability of LLMs to integrate multimodal data further expands their potential applications.
  4. Meta-evaluation: The paper emphasizes meta-evaluation, furnishing a comprehensive review of benchmarks and metrics for assessing LLM-based evaluators. The survey highlights how existing datasets and statistical metrics, such as accuracy, Pearson correlation, and Cohen’s kappa, are used to quantify agreement between LLM outputs and human judgments (a second sketch after this list illustrates these metrics).
  5. Limitations: Despite their potential, LLMs-as-Judges face limitations, including biases, vulnerability to adversarial attacks, and inherent weaknesses that impact their performance. The survey expounds on biases such as position and verbosity bias, as well as cognitive biases such as overconfidence. The adversarial attacks discussed underline the need for robust defense mechanisms, and limitations such as knowledge recency and hallucination are critically analyzed.
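
The single-LLM, prompt-based pipeline described in point 2 can be sketched briefly. The prompt wording, the call_llm placeholder, and the verdict parsing below are illustrative assumptions rather than the paper's implementation; querying both answer orderings reflects a common mitigation for the position bias noted in point 5.

```python
# Minimal sketch of a single-LLM pairwise judge (illustrative assumptions throughout).
from typing import Literal

JUDGE_PROMPT = """You are an impartial judge. Given a question and two answers,
reply with exactly "A" if Answer A is better, or "B" if Answer B is better.

Question: {question}
Answer A: {answer_a}
Answer B: {answer_b}
Verdict:"""


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion API call (assumption)."""
    raise NotImplementedError


def judge_pair(question: str, ans1: str, ans2: str) -> Literal["1", "2", "tie"]:
    # Query the judge with both answer orderings to reduce position bias.
    fwd = call_llm(JUDGE_PROMPT.format(question=question, answer_a=ans1, answer_b=ans2)).strip()
    rev = call_llm(JUDGE_PROMPT.format(question=question, answer_a=ans2, answer_b=ans1)).strip()
    if fwd == "A" and rev == "B":
        return "1"   # ans1 preferred in both orderings
    if fwd == "B" and rev == "A":
        return "2"   # ans2 preferred in both orderings
    return "tie"     # disagreement across orderings suggests position bias or a close call
```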

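For the meta-evaluation metrics in point 4, a minimal sketch of comparing an LLM judge's verdicts against human labels is shown below; the toy data are assumptions used only for illustration.

```python
# Sketch of meta-evaluating an LLM judge against human annotations (toy data).
from sklearn.metrics import accuracy_score, cohen_kappa_score
from scipy.stats import pearsonr

# Hypothetical pairwise verdicts: 1 = first answer better, 2 = second answer better.
human_labels = [1, 2, 1, 1, 2, 2, 1, 2]
judge_labels = [1, 2, 1, 2, 2, 2, 1, 1]

accuracy = accuracy_score(human_labels, judge_labels)   # raw agreement rate
kappa = cohen_kappa_score(human_labels, judge_labels)   # agreement corrected for chance

# Pearson correlation is typically reported for graded (e.g., 1-5) quality scores.
human_scores = [4, 2, 5, 3, 1, 2, 4, 3]
judge_scores = [5, 2, 4, 4, 1, 3, 4, 2]
r, _ = pearsonr(human_scores, judge_scores)

print(f"accuracy={accuracy:.2f}, kappa={kappa:.2f}, pearson_r={r:.2f}")
```
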
Implications and Future Directions:

The implications of deploying LLMs as judges are extensive in both theoretical and practical contexts. While they offer scalability and objectivity in evaluation, the risks associated with biases, inconsistencies, and over-reliance on training data necessitate caution. Future research directions outlined in the paper aim to make LLM judges more efficient, effective, and reliable. These include automated construction of evaluation criteria, multi-domain adaptability, cross-linguistic capabilities, enhanced robustness, and improved interpretability.

In conclusion, the paper provides a comprehensive survey of the LLMs-as-Judges paradigm, delineating its functionalities, methodologies, applications, and limitations while advocating future research avenues for enhancing the capacity and reliability of LLMs as evaluators. The authors contribute to the foundational understanding of how LLMs can be systematically employed for varied evaluative purposes, emphasizing the transformative potential and the challenges that accompany this innovative paradigm.

Authors (8)
  1. Haitao Li (65 papers)
  2. Qian Dong (25 papers)
  3. Junjie Chen (89 papers)
  4. Huixue Su (1 paper)
  5. Yujia Zhou (34 papers)
  6. Qingyao Ai (113 papers)
  7. Ziyi Ye (19 papers)
  8. Yiqun Liu (131 papers)
Citations (3)