
Real-Time Execution of Large-scale Language Models on Mobile (2009.06823v2)

Published 15 Sep 2020 in cs.CL and cs.LG

Abstract: Pre-trained large-scale language models have increasingly demonstrated high accuracy on many NLP tasks. However, the limited weight storage and computational speed of hardware platforms have impeded the adoption of pre-trained models, especially in the era of edge computing. In this paper, we seek to find the best model structure of BERT for a given computation size to match specific devices. We propose the first compiler-aware neural architecture optimization framework. Our framework can guarantee that the identified model meets both the resource and real-time specifications of mobile devices, thus achieving real-time execution of large transformer-based models like BERT variants. We evaluate our model on several NLP tasks, achieving competitive results on well-known benchmarks with lower latency on mobile devices. Specifically, our model is 5.2x faster on CPU and 4.1x faster on GPU with 0.5-2% accuracy loss compared with BERT-base. Our overall framework achieves up to 7.8x speedup compared with TensorFlow-Lite with only minor accuracy loss.
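The abstract describes identifying BERT variants that satisfy both resource and real-time constraints of a target device. The sketch below is not the paper's actual algorithm; it illustrates the general idea of latency-constrained architecture search with a hypothetical search space and a placeholder cost model (in the paper's framework, latency estimates would come from compiler-aware measurements on the target mobile device).

```python
# Hedged illustration of latency-constrained architecture search.
# All budgets, ranges, and the cost model below are assumptions for
# demonstration, not values from the paper.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class BertConfig:
    num_layers: int
    hidden_size: int
    num_heads: int

def param_count(cfg: BertConfig) -> int:
    # Rough per-layer transformer-encoder estimate:
    # ~4*H^2 for attention projections + ~8*H^2 for the FFN (4H intermediate).
    return cfg.num_layers * 12 * cfg.hidden_size ** 2

def estimated_latency_ms(cfg: BertConfig) -> float:
    # Placeholder cost model: scales with depth and quadratically with width.
    # A compiler-aware framework would replace this with on-device
    # measurements or a learned latency predictor.
    return 0.6 * cfg.num_layers * (cfg.hidden_size / 768) ** 2 * 24

def feasible_configs(latency_budget_ms: float, param_budget: int):
    # Enumerate a small hypothetical search space and keep only configs
    # that meet both the real-time and storage constraints.
    search_space = product(range(4, 13, 2),   # encoder layers
                           (384, 512, 768),   # hidden size
                           (6, 8, 12))        # attention heads
    for layers, hidden, heads in search_space:
        if hidden % heads:  # head dim must divide hidden size
            continue
        cfg = BertConfig(layers, hidden, heads)
        if (estimated_latency_ms(cfg) <= latency_budget_ms
                and param_count(cfg) <= param_budget):
            yield cfg

candidates = list(feasible_configs(latency_budget_ms=100.0,
                                   param_budget=60_000_000))
```

Every surviving candidate satisfies both budgets by construction; a real framework would then rank feasible candidates by task accuracy rather than stopping at feasibility.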

Authors (10)
  1. Wei Niu (68 papers)
  2. Zhenglun Kong (33 papers)
  3. Geng Yuan (58 papers)
  4. Weiwen Jiang (62 papers)
  5. Jiexiong Guan (8 papers)
  6. Caiwen Ding (98 papers)
  7. Pu Zhao (82 papers)
  8. Sijia Liu (204 papers)
  9. Bin Ren (136 papers)
  10. Yanzhi Wang (197 papers)
Citations (7)