
CityBench: Evaluating the Capabilities of Large Language Model as World Model (2406.13945v1)

Published 20 Jun 2024 in cs.AI, cs.CL, and cs.LG

Abstract: LLMs with powerful generalization ability have been widely used in many domains. A systematic and reliable evaluation of LLMs is a crucial step in their development and application, especially in specific professional fields. In the urban domain, there have been early explorations of the usability of LLMs, but a systematic and scalable evaluation benchmark is still lacking. The challenge in constructing such a benchmark lies in the diversity of data and scenarios, as well as the complex and dynamic nature of cities. In this paper, we propose CityBench, an interactive, simulator-based evaluation platform, as the first systematic benchmark of LLM capabilities in the urban domain. First, we build CitySim to integrate multi-source data and simulate fine-grained urban dynamics. Based on CitySim, we design 7 tasks in 2 categories, perception-understanding and decision-making, to evaluate the capability of LLMs as city-scale world models. Thanks to the flexibility and ease of use of CitySim, our evaluation platform CityBench can be easily extended to any city in the world. We evaluate 13 well-known LLMs, including open-source and commercial models, across 13 cities around the world. Extensive experiments demonstrate the scalability and effectiveness of the proposed CityBench and shed light on the future development of LLMs in the urban domain. The dataset, benchmark, and source code are openly accessible to the research community via https://github.com/tsinghua-fib-lab/CityBench
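The abstract describes an interactive, simulator-based evaluation: the LLM under test repeatedly observes the simulated city state and issues actions, and its behavior on decision-making tasks is scored. Below is a minimal Python sketch of what such an evaluation loop could look like. All names here (CitySimEnv, evaluate, the mock policy) are illustrative assumptions for exposition, not the actual CitySim/CityBench API from the linked repository.

```python
# Hypothetical sketch of a simulator-in-the-loop LLM evaluation, in the spirit
# of CityBench. Class and function names are assumptions, not the repo's API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class CitySimEnv:
    """Stand-in for a city simulator exposing a gym-style step interface."""
    city: str
    steps: int = 0

    def reset(self) -> str:
        self.steps = 0
        return f"You are at an intersection in {self.city}. Roads lead N, E, S, W."

    def step(self, action: str) -> tuple[str, float, bool]:
        # A real simulator would update fine-grained urban dynamics here;
        # this stub rewards heading north and ends the episode after 5 steps.
        self.steps += 1
        reward = 1.0 if action.strip().upper().startswith("N") else 0.0
        done = self.steps >= 5
        return f"Step {self.steps}: you moved {action}.", reward, done

def evaluate(llm: Callable[[str], str], city: str) -> float:
    """Run one decision-making episode; the LLM acts as the policy."""
    env = CitySimEnv(city)
    obs, total, done = env.reset(), 0.0, False
    while not done:
        prompt = (f"Observation: {obs}\n"
                  "Choose one action from [N, E, S, W]. Answer with the letter only.")
        action = llm(prompt)  # call out to the model under evaluation
        obs, reward, done = env.step(action)
        total += reward
    return total

if __name__ == "__main__":
    # Trivial mock "LLM" that always heads north, so the sketch runs standalone.
    print(evaluate(lambda prompt: "N", city="Beijing"))
```

Because the environment is decoupled from the model behind a plain observation/action interface, the same loop can be pointed at any of the 13 evaluated LLMs or instantiated for any city the simulator supports.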

Authors (9)
  1. Jie Feng (103 papers)
  2. Jun Zhang (1008 papers)
  3. Junbo Yan (4 papers)
  4. Xin Zhang (904 papers)
  5. Tianjian Ouyang (4 papers)
  6. Tianhui Liu (4 papers)
  7. Yuwei Du (6 papers)
  8. Siqi Guo (7 papers)
  9. Yong Li (628 papers)