MobileAIBench: Benchmarking LLMs and LMMs for On-Device Use Cases (2406.10290v1)
Abstract: The deployment of Large Language Models (LLMs) and Large Multimodal Models (LMMs) on mobile devices has gained significant attention due to the benefits of enhanced privacy, stability, and personalization. However, the hardware constraints of mobile devices necessitate the use of models with fewer parameters and model compression techniques like quantization. Currently, there is limited understanding of quantization's impact on performance across various tasks, including standard LLM tasks, LMM tasks, and, critically, trust and safety. Moreover, adequate tools for systematically testing these models on mobile devices are lacking. To address these gaps, we introduce MobileAIBench, a comprehensive benchmarking framework for evaluating mobile-optimized LLMs and LMMs. MobileAIBench assesses models across different sizes, quantization levels, and tasks, measuring latency and resource consumption on real devices. Our two-part open-source framework includes a library for running evaluations on desktops and an iOS app for on-device latency and hardware utilization measurements. Our thorough analysis aims to accelerate mobile AI research and deployment by providing insights into the performance and feasibility of deploying LLMs and LMMs on mobile platforms.
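The abstract does not expose MobileAIBench's actual API, so the following is only a minimal, hypothetical Python sketch of the kind of latency and memory measurement loop such a benchmarking framework performs on a quantized model; `benchmark_inference` and `generate_fn` are illustrative names, not part of the released library.

```python
import time
import psutil  # used here to read resident memory of the current process

def benchmark_inference(generate_fn, prompts, warmup=1):
    """Measure per-prompt latency and peak resident memory for a generation callable.

    `generate_fn` is a placeholder for any local (e.g. 4-bit quantized) model's
    inference function; this is NOT the MobileAIBench API, just an illustration.
    """
    proc = psutil.Process()

    # Warm-up runs so one-time costs (weight loading, cache setup) do not skew timings.
    for _ in range(warmup):
        generate_fn(prompts[0])

    latencies, peak_rss = [], 0
    for prompt in prompts:
        start = time.perf_counter()
        _ = generate_fn(prompt)
        latencies.append(time.perf_counter() - start)
        peak_rss = max(peak_rss, proc.memory_info().rss)

    return {
        "mean_latency_s": sum(latencies) / len(latencies),
        "peak_rss_mb": peak_rss / (1024 ** 2),
    }

# Hypothetical usage: plug in any locally served quantized model's generate function.
# results = benchmark_inference(my_quantized_model.generate, ["Summarize: ..."])
# print(results)
```

A real framework would additionally score task outputs (e.g. question answering, summarization, trust and safety probes) and, on iOS, read hardware counters rather than `psutil`; this sketch only illustrates the latency/resource side measured per prompt.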
- Rithesh Murthy
- Liangwei Yang
- Juntao Tan
- Tulika Manoj Awalgaonkar
- Yilun Zhou
- Shelby Heinecke
- Sachin Desai
- Jason Wu
- Ran Xu
- Sarah Tan
- Jianguo Zhang
- Zhiwei Liu
- Shirley Kokane
- Zuxin Liu
- Ming Zhu
- Huan Wang
- Caiming Xiong
- Silvio Savarese