
What is the best model? Application-driven Evaluation for Large Language Models (2406.10307v1)

Published 14 Jun 2024 in cs.CL and cs.AI

Abstract: General LLMs enhanced with supervised fine-tuning and reinforcement learning from human feedback are increasingly popular in academia and industry, as they generalize foundation models to various practical tasks via prompting. To assist users in selecting the best model for practical application scenarios, i.e., choosing the model that meets the application requirements while minimizing cost, we introduce A-Eval, an application-driven evaluation benchmark for general LLMs. First, we categorize evaluation tasks into five main categories and 27 sub-categories from a practical application perspective. Next, we construct a dataset comprising 678 question-and-answer pairs through a process of collecting, annotating, and reviewing. Then, we design an objective and effective evaluation method and evaluate a series of LLMs of different scales on A-Eval. Finally, we reveal interesting laws regarding model scale and task difficulty level and propose a feasible method for selecting the best model. Through A-Eval, we provide clear empirical and engineering guidance for selecting the best model, reducing the barriers to selecting and using LLMs and promoting their application and development. Our benchmark is publicly available at https://github.com/UnicomAI/DataSet/tree/main/TestData/GeneralAbility.
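The selection criterion the abstract describes, choosing the model that meets the application's requirements while minimizing cost, can be sketched as a simple filter-then-minimize step. The function name, score scale, and cost figures below are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch of the A-Eval selection idea: among candidate models,
# keep those whose task score meets the application's requirement, then
# return the cheapest. All names, scores, and costs are made up.

def select_best_model(candidates, required_score):
    """candidates: dicts with 'name', 'score' (task evaluation score),
    and 'cost' (e.g. relative inference cost). Returns the lowest-cost
    qualifying model, or None if no model meets the requirement."""
    qualified = [m for m in candidates if m["score"] >= required_score]
    if not qualified:
        return None
    return min(qualified, key=lambda m: m["cost"])

models = [
    {"name": "model-7B",  "score": 72.0, "cost": 1.0},
    {"name": "model-14B", "score": 81.5, "cost": 2.3},
    {"name": "model-70B", "score": 86.0, "cost": 9.0},
]

best = select_best_model(models, required_score=80.0)
print(best["name"])  # -> model-14B: cheapest model scoring at least 80
```

This mirrors the paper's framing that a larger (and costlier) model is only worth choosing when smaller models fail the task's difficulty requirement.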

Authors (8)
  1. Shiguo Lian (54 papers)
  2. Kaikai Zhao (7 papers)
  3. Xinhui Liu (6 papers)
  4. Xuejiao Lei (6 papers)
  5. Bikun Yang (3 papers)
  6. Wenjing Zhang (28 papers)
  7. Kai Wang (624 papers)
  8. Zhaoxiang Liu (54 papers)
Citations (1)