Enable Deep Learning on Mobile Devices: Methods, Systems, and Applications (2204.11786v1)

Published 25 Apr 2022 in cs.LG, cs.CL, and cs.CV

Abstract: Deep neural networks (DNNs) have achieved unprecedented success in the field of AI, including computer vision, natural language processing, and speech recognition. However, their superior performance comes at the considerable cost of computational complexity, which greatly hinders their applications in many resource-constrained devices, such as mobile phones and Internet of Things (IoT) devices. Therefore, methods and techniques that can lift the efficiency bottleneck while preserving the high accuracy of DNNs are in great demand to enable numerous edge AI applications. This paper provides an overview of efficient deep learning methods, systems, and applications. We begin by introducing popular model compression methods, including pruning, factorization, quantization, and compact model design. To reduce the large design cost of these manual solutions, we discuss the AutoML framework for each of them, such as neural architecture search (NAS) and automated pruning and quantization. We then cover efficient on-device training to enable user customization based on the local data on mobile devices. Apart from general acceleration techniques, we also showcase several task-specific accelerations for point cloud, video, and natural language processing by exploiting their spatial sparsity and temporal/token redundancy. Finally, to support all these algorithmic advancements, we introduce efficient deep learning system design from both software and hardware perspectives.
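Two of the compression methods the abstract surveys, pruning and quantization, can be illustrated with a minimal sketch. This is not the paper's implementation; it is a toy pure-Python example of unstructured magnitude pruning (zero out the smallest-magnitude fraction of weights) and symmetric uniform quantization (round weights to an 8-bit integer grid), with all function names and the example weight list invented for illustration:

```python
def magnitude_prune(weights, sparsity):
    """Unstructured magnitude pruning: zero the smallest-|w| fraction of weights."""
    k = int(len(weights) * sparsity)  # number of weights to remove
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

def uniform_quantize(weights, num_bits=8):
    """Symmetric uniform quantization: round to a num_bits integer grid, then dequantize."""
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax
    if scale == 0:
        return list(weights)
    # clip to the representable integer range, round, and map back to floats
    return [max(-qmax - 1, min(qmax, round(w / scale))) * scale for w in weights]

# Toy weight vector (illustrative values only)
w = [0.9, -0.05, 0.4, -1.2, 0.02, 0.7, -0.3, 0.11]
pruned = magnitude_prune(w, sparsity=0.5)      # half the weights become zero
quantized = uniform_quantize(pruned, num_bits=8)
```

In practice these steps are applied per layer to tensors (and often followed by fine-tuning to recover accuracy), but the thresholding and rounding logic is the same.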

Authors (8)
  1. Han Cai (79 papers)
  2. Ji Lin (47 papers)
  3. Yujun Lin (23 papers)
  4. Zhijian Liu (41 papers)
  5. Haotian Tang (28 papers)
  6. Hanrui Wang (49 papers)
  7. Ligeng Zhu (22 papers)
  8. Song Han (155 papers)
Citations (91)