MobileAgentBench: An Efficient and User-Friendly Benchmark for Mobile LLM Agents (2406.08184v1)

Published 12 Jun 2024 in cs.AI and cs.HC

Abstract: LLM-based mobile agents are increasingly popular due to their capability to interact directly with mobile phone Graphical User Interfaces (GUIs) and their potential to autonomously manage daily tasks. Despite their promising prospects in both academic and industrial sectors, little research has focused on benchmarking the performance of existing mobile agents, owing to the inexhaustible states of apps and the vague definition of feasible action sequences. To address this challenge, we propose an efficient and user-friendly benchmark, MobileAgentBench, designed to alleviate the burden of extensive manual testing. We initially define 100 tasks across 10 open-source apps, categorized by multiple levels of difficulty. Subsequently, we evaluate several existing mobile agents, including AppAgent and MobileAgent, to thoroughly and systematically compare their performance. All materials are accessible on our project webpage: https://MobileAgentBench.github.io, contributing to the advancement of both academic and industrial fields.

Authors (8)
  1. Luyuan Wang (3 papers)
  2. Yongyu Deng (1 paper)
  3. Yiwei Zha (2 papers)
  4. Guodong Mao (1 paper)
  5. Qinmin Wang (1 paper)
  6. Tianchen Min (1 paper)
  7. Wei Chen (1288 papers)
  8. Shoufa Chen (22 papers)
Citations (8)