Language Model Evaluation in Open-ended Text Generation (2108.03578v1)

Published 8 Aug 2021 in cs.CL and cs.LG

Abstract: Although current state-of-the-art LLMs have achieved impressive results in numerous natural language processing tasks, they still fail to solve the problem of producing repetitive, dull, and sometimes inconsistent text in open-ended generation. Studies often attribute this problem to the maximum likelihood training objective, and propose alternative approaches using stochastic decoding methods or altered training objectives. However, there is still a lack of consistent evaluation metrics for directly comparing the efficacy of these solutions. In this work, we study different evaluation metrics that have been proposed to evaluate the quality, diversity, and consistency of machine-generated text. From there, we propose a practical pipeline for evaluating LLMs on the open-ended generation task, and investigate how to improve the model's performance in all dimensions by leveraging different auxiliary training objectives.
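The abstract mentions diversity as one of the dimensions evaluated. A widely used diversity metric in this line of work is distinct-n, the fraction of unique n-grams among all n-grams in a set of generations. The sketch below is an illustration of that metric only, not the paper's actual evaluation pipeline; the function name and whitespace tokenization are assumptions for the example.

```python
from collections import Counter


def distinct_n(texts, n):
    """Fraction of unique n-grams across a set of generated texts.

    Lower values indicate more repetitive output. Whitespace
    tokenization is used here purely for illustration.
    """
    ngram_counts = Counter()
    for text in texts:
        tokens = text.split()
        for i in range(len(tokens) - n + 1):
            ngram_counts[tuple(tokens[i:i + n])] += 1
    total = sum(ngram_counts.values())
    return len(ngram_counts) / total if total else 0.0


samples = [
    "the cat sat on the mat",
    "the cat sat on the mat",  # exact repetition lowers the score
    "a dog ran in the park",
]
print(distinct_n(samples, 2))
```

A highly repetitive model repeats the same n-grams across samples, driving distinct-n toward zero, while a diverse model keeps it near one; this is why the metric is commonly paired with quality measures, since random text also scores high on diversity alone.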

Authors (1)
  1. An Nguyen (17 papers)
Citations (3)