
On-device Training: A First Overview on Existing Systems (2212.00824v3)

Published 1 Dec 2022 in cs.LG

Abstract: The recent breakthroughs in machine learning (ML) and deep learning (DL) have catalyzed the design and development of various intelligent systems across wide application domains. While most existing ML and DL models require large memory and computing power, efforts have been made to deploy some models on resource-constrained devices as well. A majority of the early application systems focused on exploiting the inference capabilities of ML and DL models, where data captured from different mobile and embedded sensing components are processed through these models for application goals such as classification and segmentation. More recently, the concept of exploiting mobile and embedded computing resources for ML/DL model training has gained attention, as such capabilities allow (i) the training of models via local data without the need to share data over wireless links, thus enabling privacy-preserving computation by design, (ii) model personalization and environment adaptation, and (iii) deployment of accurate models in remote and hardly accessible locations without stable internet connectivity. This work summarizes and analyzes state-of-the-art systems research that enables such on-device model training capabilities and provides a survey of on-device training from a systems perspective.

Authors (4)
  1. Shuai Zhu (6 papers)
  2. Thiemo Voigt (19 papers)
  3. JeongGil Ko (9 papers)
  4. Fatemeh Rahimian (2 papers)
Citations (10)