LongBench v2: Towards Deeper Understanding and Reasoning on Realistic Long-context Multitasks (2412.15204v1)

Published 19 Dec 2024 in cs.CL and cs.AI

Abstract: This paper introduces LongBench v2, a benchmark designed to assess the ability of LLMs to handle long-context problems requiring deep understanding and reasoning across real-world multitasks. LongBench v2 consists of 503 challenging multiple-choice questions, with contexts ranging from 8k to 2M words, across six major task categories: single-document QA, multi-document QA, long in-context learning, long-dialogue history understanding, code repository understanding, and long structured data understanding. To ensure breadth and practicality, we collect data from nearly 100 highly educated individuals with diverse professional backgrounds. We employ both automated and manual review processes to maintain high quality and difficulty, resulting in human experts achieving only 53.7% accuracy under a 15-minute time constraint. Our evaluation reveals that the best-performing model, when answering the questions directly, achieves only 50.1% accuracy. In contrast, the o1-preview model, which includes longer reasoning, achieves 57.7%, surpassing the human baseline by 4%. These results highlight the importance of enhanced reasoning ability and scaling inference-time compute to tackle the long-context challenges in LongBench v2. The project is available at https://longbench2.github.io.

LongBench v2: Advancing Evaluations in Long-Context LLMs

The paper "LongBench v2: Towards Deeper Understanding and Reasoning on Realistic Long-context Multitasks" addresses a critical gap in the evaluation of LLMs by introducing a comprehensive benchmark specifically designed to test LLM capabilities in handling long-context problems. This benchmark, LongBench v2, is a robust advancement aimed at evaluating models in real-world multitasks that require intricate reasoning and deep understanding across extensive text lengths, ranging from 8,000 to 2 million words.

LongBench v2 comprises 503 challenging multiple-choice questions spanning six primary task types: single-document QA, multi-document QA, long in-context learning, long-dialogue history understanding, code repository understanding, and long structured data understanding. Breadth and realism are ensured by sourcing data from nearly 100 highly educated contributors with diverse professional backgrounds, and both automated and manual review passes maintain quality and difficulty. The result is a demanding benchmark: human experts achieve only 53.7% accuracy under a 15-minute time constraint, underscoring the depth of comprehension the tasks demand.
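To make the task format concrete, the sketch below shows how one might load and inspect a few benchmark items. It assumes the dataset is published on Hugging Face under a repository such as THUDM/LongBench-v2, with per-item fields for the context, question, four choices, and a gold answer letter; the repository name, split, and field names are illustrative assumptions, not details confirmed by this summary.

```python
# A minimal sketch of inspecting LongBench v2 items. The repository name
# "THUDM/LongBench-v2", the split name, and the field names below are
# assumptions for illustration; check the project page for the actual layout.
from datasets import load_dataset

dataset = load_dataset("THUDM/LongBench-v2", split="train")

for item in dataset.select(range(3)):  # peek at a few items
    prompt = (
        f"{item['context']}\n\n"
        f"Question: {item['question']}\n"
        f"A. {item['choice_A']}\nB. {item['choice_B']}\n"
        f"C. {item['choice_C']}\nD. {item['choice_D']}\n"
        "Answer with a single letter (A, B, C, or D)."
    )
    print(len(item["context"].split()), "context words; gold answer:", item["answer"])
```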

The evaluation results show that, under direct answering, the best-performing model reaches only 50.1% accuracy, whereas o1-preview, which incorporates longer inference-time reasoning, achieves 57.7%, surpassing the human baseline by 4%. This points to the potential of LLMs to exceed human performance on specific long-context reasoning tasks when inference-time compute is sufficiently scaled. Many models, however, perform well below this level, and the wide variability in performance underscores the need for further research into scaling inference-time reasoning.

LongBench v2's design contrasts sharply with existing benchmarks, which often rely on extractive questions and synthetic tasks; such setups can overstate a model's true ability to comprehend and reason over long contexts. LongBench v2 instead uses a multiple-choice format, which allows straightforward accuracy-based scoring and avoids the less reliable free-form metrics, such as F1 or ROUGE, that are common in other benchmarks.
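Because each question has a single gold option, scoring reduces to extracting a choice letter from the model's output and comparing it with the reference. The snippet below is an illustrative sketch of such accuracy-based scoring, not the authors' official evaluation code; the extraction regex is an assumption about how answers are parsed.

```python
import re

def extract_choice(model_output: str) -> str | None:
    """Return the first standalone A/B/C/D in a model's free-form answer, if any."""
    match = re.search(r"\b([ABCD])\b", model_output)
    return match.group(1) if match else None

def accuracy(predictions: list[str], references: list[str]) -> float:
    """Exact-match accuracy over extracted choice letters."""
    correct = sum(extract_choice(p) == r for p, r in zip(predictions, references))
    return correct / len(references)

# Example: two of the three extracted letters match the gold answers.
print(accuracy(["The answer is B.", "C", "I would pick (A)."], ["B", "D", "A"]))
```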

The structured approach used to curate LongBench v2's tasks warrants close examination. The tasks are designed to stress-test capabilities beyond mere retrieval, pushing toward genuine comprehension and reasoning over long textual inputs. For example, multi-document QA tasks test a model's ability to synthesize information from multiple sources, while long-dialogue history tasks evaluate its memory and understanding across sequential exchanges.

From a practical and theoretical standpoint, this benchmark holds significant implications for the future development of LLMs. The results point towards the importance of enhancing test-time reasoning abilities and also suggest that retrieval-augmented generation (RAG) could be leveraged more effectively across various context lengths. Furthermore, the findings underscore the necessity for continued research into improving models' abilities to maintain reasoning efficacy despite increased context length, an aspect that remains challenging as demonstrated by the disparity in performance over shorter versus longer text contexts.
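As a rough illustration of the RAG direction mentioned above, the sketch below chunks a long context and selects the chunks most lexically similar to the question before building a prompt. The word-overlap scorer is a deliberately naive stand-in, not the retrieval setup evaluated in the paper.

```python
from collections import Counter

def chunk(text: str, words_per_chunk: int = 512) -> list[str]:
    """Split a long context into fixed-size word windows."""
    words = text.split()
    return [" ".join(words[i:i + words_per_chunk])
            for i in range(0, len(words), words_per_chunk)]

def top_k_chunks(question: str, chunks: list[str], k: int = 4) -> list[str]:
    """Rank chunks by word overlap with the question (a stand-in for a real retriever)."""
    q_counts = Counter(question.lower().split())
    def overlap(c: str) -> int:
        return sum((Counter(c.lower().split()) & q_counts).values())
    return sorted(chunks, key=overlap, reverse=True)[:k]

# Rather than feeding the full 8k-2M-word context, prompt the model with only
# the retrieved chunks plus the question.
long_context = "..."  # placeholder for a long document
question = "Which function initializes the cache?"
prompt = "\n\n".join(top_k_chunks(question, chunk(long_context))) + "\n\n" + question
```

In practice one would swap the overlap scorer for an embedding-based retriever, but the pipeline shape (chunk, rank, truncate the prompt) stays the same across context lengths.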

In conclusion, LongBench v2 not only provides an essential tool for gauging the current state of long-context LLMs but also sets a standard that future developments can build upon, urging advancements both in model architecture and evaluation strategies. It paves the way for developing models that maintain or even improve their reasoning capabilities with increasing input lengths, steps that are crucial in the pursuit of AI systems that can handle real-world information complexity efficiently and accurately.

Authors (12)
  1. Yushi Bai (31 papers)
  2. Shangqing Tu (18 papers)
  3. Jiajie Zhang (30 papers)
  4. Hao Peng (291 papers)
  5. Xiaozhi Wang (51 papers)
  6. Xin Lv (38 papers)
  7. Shulin Cao (23 papers)
  8. Jiazheng Xu (10 papers)
  9. Lei Hou (127 papers)
  10. Yuxiao Dong (119 papers)
  11. Jie Tang (302 papers)
  12. Juanzi Li (144 papers)