
On-Device Machine Learning: An Algorithms and Learning Theory Perspective (1911.00623v2)

Published 2 Nov 2019 in cs.LG, cs.DC, and stat.ML

Abstract: The predominant paradigm for using machine learning models on a device is to train a model in the cloud and perform inference using the trained model on the device. However, with the increasing number of smart devices and improved hardware, there is interest in performing model training on the device. Given this surge in interest, a comprehensive survey of the field from a device-agnostic perspective sets the stage both for understanding the state-of-the-art and for identifying open challenges and future avenues of research. However, on-device learning is an expansive field with connections to a large number of related topics in AI and machine learning (including online learning, model adaptation, and one/few-shot learning). Hence, covering such a large number of topics in a single survey is impractical. This survey finds a middle ground by reformulating the problem of on-device learning as resource-constrained learning, where the resources are compute and memory. This reformulation allows tools, techniques, and algorithms from a wide variety of research areas to be compared equitably. In addition to summarizing the state-of-the-art, the survey also identifies a number of challenges and next steps for both the algorithmic and theoretical aspects of on-device learning.

Authors (6)
  1. Sauptik Dhar (11 papers)
  2. Junyao Guo (7 papers)
  3. Jiayi Liu (60 papers)
  4. Samarth Tripathi (8 papers)
  5. Unmesh Kurup (10 papers)
  6. Mohak Shah (20 papers)
Citations (124)
