
LLMeBench: A Flexible Framework for Accelerating LLMs Benchmarking (2308.04945v2)

Published 9 Aug 2023 in cs.CL and cs.AI

Abstract: The recent development and success of LLMs necessitate an evaluation of their performance across diverse NLP tasks in different languages. Although several frameworks have been developed and made publicly available, their customization capabilities for specific tasks and datasets are often complex for different users. In this study, we introduce the LLMeBench framework, which can be seamlessly customized to evaluate LLMs for any NLP task, regardless of language. The framework features generic dataset loaders, several model providers, and pre-implements most standard evaluation metrics. It supports in-context learning with zero- and few-shot settings. A specific dataset and task can be evaluated for a given LLM in less than 20 lines of code while allowing full flexibility to extend the framework for custom datasets, models, or tasks. The framework has been tested on 31 unique NLP tasks using 53 publicly available datasets within 90 experimental setups, involving approximately 296K data points. We open-sourced LLMeBench for the community (https://github.com/qcri/LLMeBench/) and a video demonstrating the framework is available online. (https://youtu.be/9cC2m_abk3A)

Authors (13)
  1. Fahim Dalvi (45 papers)
  2. Maram Hasanain (24 papers)
  3. Sabri Boughorbel (12 papers)
  4. Basel Mousi (9 papers)
  5. Samir Abdaljalil (7 papers)
  6. Nizi Nazar (3 papers)
  7. Ahmed Abdelali (21 papers)
  8. Shammur Absar Chowdhury (31 papers)
  9. Hamdy Mubarak (34 papers)
  10. Ahmed Ali (72 papers)
  11. Majd Hawasly (18 papers)
  12. Nadir Durrani (48 papers)
  13. Firoj Alam (75 papers)
Citations (19)

Summary

LLMeBench: A Comprehensive Framework for Flexible Benchmarking of LLMs

The paper introduces LLMeBench, a versatile benchmarking framework designed to evaluate LLMs across diverse NLP tasks and languages. The tool addresses the difficulty of customizing existing benchmarking frameworks for specific tasks and datasets, offering an adaptable solution that transitions seamlessly across tasks, datasets, and languages. The framework is notable for accommodating both zero-shot and few-shot in-context learning.

Framework Architecture and Features

LLMeBench's architecture is modular, consisting of customizable components for datasets, models, evaluation metrics, and benchmarking assets. This modularity allows users to define datasets and models flexibly, integrate new tasks, and establish custom evaluation metrics. The architecture supports several model providers, including the OpenAI and Hugging Face inference APIs, as well as FastChat and Petals for local deployments, covering both hosted and self-hosted scenarios.
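
As a concrete illustration of this modularity, the abstract notes that a dataset and task can be evaluated for a given LLM in under 20 lines of code by writing a benchmarking "asset" that wires the components together. The sketch below shows what such an asset might look like; the module paths, class names (`HuggingFaceDataset`, `OpenAIModel`, `ClassificationTask`), argument keys, and the sentiment task itself are assumptions based on the paper's description, not a definitive rendering of the library's API.

```python
# Hypothetical LLMeBench-style benchmarking asset (names and paths are
# assumptions, not the library's exact API). It wires a dataset loader,
# a model provider, and a task together, plus prompt construction and
# post-processing of the model output.
from llmebench.datasets import HuggingFaceDataset   # assumed module path
from llmebench.models import OpenAIModel            # assumed module path
from llmebench.tasks import ClassificationTask      # assumed module path

def config():
    # Declares which dataset, task, and model provider to benchmark.
    return {
        "dataset": HuggingFaceDataset,
        "dataset_args": {"huggingface_dataset_name": "imdb"},  # illustrative dataset
        "task": ClassificationTask,
        "task_args": {},
        "model": OpenAIModel,
        "model_args": {"max_tries": 3},
    }

def prompt(input_sample):
    # Zero-shot prompt; a few-shot variant would also receive selected examples.
    return [{"role": "user",
             "content": f"Classify the sentiment of this review as positive or negative:\n{input_sample}"}]

def post_process(response):
    # Map the raw model output back to the task's label space.
    text = response["choices"][0]["message"]["content"].strip().lower()
    return "positive" if "positive" in text else "negative"
```

Under these assumptions, the framework's runner would load the dataset, render prompts, call the model provider (with caching), and score the post-processed outputs with the task's metric.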

Some key features of LLMeBench include:

  • Generic Data Loaders: The framework supports numerous data formats, such as CSV, JSON, and datasets from Hugging Face, enabling broad application across different input types.
  • Prompts and In-context Learning: LLMeBench supports zero-shot and few-shot learning paradigms, with automatic selection of few-shot examples via a maximal marginal relevance (MMR) based approach (see the sketch after this list).
  • Caching and Logging: The framework incorporates efficient caching to minimize redundant API calls, which enhances cost-effectiveness and reduces execution time. This is complemented by robust logging features that facilitate thorough output analysis.
  • Language Agnosticism: The framework is language agnostic by design and has been applied to tasks spanning 12 languages.
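
The MMR-based selection mentioned above can be sketched independently of the framework: given embeddings of the test input and of a pool of training examples, MMR greedily picks examples that are relevant to the input while penalizing redundancy among the shots already chosen. The embedding source and the trade-off value `lam` below are illustrative assumptions, not values prescribed by the paper.

```python
import numpy as np

def mmr_select(query_vec, pool_vecs, k=3, lam=0.7):
    """Greedy maximal-marginal-relevance selection of k few-shot examples.

    query_vec: (d,) embedding of the test input.
    pool_vecs: (n, d) embeddings of candidate training examples.
    lam: relevance/diversity trade-off (assumed value, not from the paper).
    Returns the indices of the selected examples.
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    relevance = np.array([cos(query_vec, v) for v in pool_vecs])
    selected, remaining = [], list(range(len(pool_vecs)))
    while remaining and len(selected) < k:
        def score(i):
            # Similarity to the query minus similarity to anything already picked.
            redundancy = max((cos(pool_vecs[i], pool_vecs[j]) for j in selected),
                             default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

The selected examples would then be rendered into the few-shot portion of the prompt before it is sent to the model provider.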

Evaluation Across Numerous Tasks and Datasets

The framework has been validated on 31 unique NLP tasks using 53 publicly available datasets, across 90 experimental setups involving approximately 296K data points. These tasks range from standard problems such as classification and regression to more specialized applications such as machine translation and semantic parsing. This breadth of testing underscores the framework's robustness and applicability across a wide array of NLP problems.

Implications and Future Directions

Practically, LLMeBench serves as a valuable resource for researchers and developers wishing to benchmark LLMs without extensive setup or infrastructure requirements. It can significantly streamline the process of evaluating different models or languages by reducing the overhead associated with benchmark customization and execution.

Theoretically, the framework's modular design and flexibility facilitate the exploration of novel benchmarking dimensions. Researchers can explore the impact of different data formats, model configurations, or evaluation metrics on LLM performance and applicability.

Looking towards future developments, the paper suggests the integration of more comprehensive language and task coverage in LLMeBench. Additional features could include adaptable cross-validation datasets, broader community collaboration for extending task types, and continued enhancement of model compatibility and accessibility. Such expansions would further solidify LLMeBench's utility as an indispensable tool for NLP researchers exploring the diverse capabilities of LLMs.
