On-Device Neural Net Inference with Mobile GPUs (1907.01989v1)

Published 3 Jul 2019 in cs.LG, cs.CV, cs.DC, and stat.ML

Abstract: On-device inference of machine learning models for mobile phones is desirable due to its lower latency and increased privacy. Running such a compute-intensive task solely on the mobile CPU, however, can be difficult due to limited computing power, thermal constraints, and energy consumption. App developers and researchers have begun exploiting hardware accelerators to overcome these challenges. Recently, device manufacturers are adding neural processing units into high-end phones for on-device inference, but these account for only a small fraction of hand-held devices. In this paper, we present how we leverage the mobile GPU, a ubiquitous hardware accelerator on virtually every phone, to run inference of deep neural networks in real-time for both Android and iOS devices. By describing our architecture, we also discuss how to design networks that are mobile GPU-friendly. Our state-of-the-art mobile GPU inference engine is integrated into the open-source project TensorFlow Lite and publicly available at https://tensorflow.org/lite.

Authors (9)
  1. Juhyun Lee (10 papers)
  2. Nikolay Chirkov (1 paper)
  3. Ekaterina Ignasheva (2 papers)
  4. Yury Pisarchyk (2 papers)
  5. Mogan Shieh (1 paper)
  6. Fabio Riccardi (1 paper)
  7. Raman Sarokin (4 papers)
  8. Andrei Kulik (4 papers)
  9. Matthias Grundmann (31 papers)
Citations (86)