MNN-LLM: A Generic Inference Engine for Fast Large Language Model Deployment on Mobile Devices (2506.10443v1)
Abstract: Large language models (LLMs) have demonstrated exceptional performance across a variety of tasks. However, their substantial scale leads to significant computational resource consumption during inference, resulting in high costs. Consequently, inference on edge devices presents a promising alternative. The primary challenges of edge inference are memory usage and inference speed. This paper introduces MNN-LLM, a framework specifically designed to accelerate the deployment of LLMs on mobile devices. MNN-LLM addresses the runtime characteristics of LLMs through model quantization and DRAM-Flash hybrid storage, effectively reducing memory usage. It rearranges weights and inputs according to mobile CPU instruction sets and GPU characteristics, and employs strategies such as multicore load balancing, mixed-precision floating-point operations, and geometric computation to enhance performance. Notably, MNN-LLM achieves up to an 8.6x speedup compared to current mainstream LLM-specific frameworks.
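To make the weight-quantization idea concrete, below is a minimal C++ sketch of per-block asymmetric 4-bit weight quantization, a common approach to shrinking LLM weights for edge inference. The block layout, helper names, and block size are illustrative assumptions, not MNN-LLM's actual kernel format.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

// One quantized block: 4-bit codes packed two per byte, plus a
// per-block scale and zero-point (asymmetric quantization).
struct QuantBlock {
    std::vector<uint8_t> packed;
    float scale;
    float zeroPoint;
};

// Quantize n float weights into 4-bit codes in the range [0, 15].
QuantBlock quantizeBlockInt4(const float* w, int n) {
    float wMin = *std::min_element(w, w + n);
    float wMax = *std::max_element(w, w + n);
    QuantBlock blk;
    blk.scale = (wMax - wMin) / 15.0f;
    if (blk.scale == 0.0f) blk.scale = 1.0f;  // guard against constant blocks
    blk.zeroPoint = wMin;
    blk.packed.assign((n + 1) / 2, 0);
    for (int i = 0; i < n; ++i) {
        int q = static_cast<int>(std::lround((w[i] - blk.zeroPoint) / blk.scale));
        q = std::max(0, std::min(15, q));
        blk.packed[i / 2] |= static_cast<uint8_t>(q) << ((i % 2) * 4);
    }
    return blk;
}

// Dequantize one element back to float (used at matmul time or for checks).
float dequant(const QuantBlock& blk, int i) {
    int q = (blk.packed[i / 2] >> ((i % 2) * 4)) & 0xF;
    return q * blk.scale + blk.zeroPoint;
}

int main() {
    std::vector<float> w = {0.12f, -0.53f, 0.98f, -1.20f, 0.03f, 0.44f, -0.77f, 0.61f};
    QuantBlock blk = quantizeBlockInt4(w.data(), static_cast<int>(w.size()));
    for (size_t i = 0; i < w.size(); ++i) {
        std::printf("w=% .3f  dq=% .3f\n", w[i], dequant(blk, static_cast<int>(i)));
    }
    return 0;
}
```

In this scheme the 4-bit codes cut weight storage roughly 4x relative to fp16, and the per-block scale/zero-point limits the accuracy loss; actual engines additionally reorder the packed weights to match the CPU/GPU tile shapes used by their matrix-multiplication kernels.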
- Zhaode Wang
- Jingbang Yang
- Xinyu Qian
- Shiwen Xing
- Xiaotang Jiang
- Chengfei Lv
- Shengyu Zhang