
A First Look at Deep Learning Apps on Smartphones (1812.05448v4)

Published 8 Nov 2018 in cs.LG and cs.CY

Abstract: We are in the dawn of deep learning explosion for smartphones. To bridge the gap between research and practice, we present the first empirical study on 16,500 of the most popular Android apps, demystifying how smartphone apps exploit deep learning in the wild. To this end, we build a new static tool that dissects apps and analyzes their deep learning functions. Our study answers threefold questions: what are the early adopter apps of deep learning, what do they use deep learning for, and what do their deep learning models look like. Our study has strong implications for app developers, smartphone vendors, and deep learning R&D. On one hand, our findings paint a promising picture of deep learning for smartphones, showing the prosperity of mobile deep learning frameworks as well as the prosperity of apps building their cores atop deep learning. On the other hand, our findings urge optimizations on deep learning models deployed on smartphones, the protection of these models, and validation of research ideas on these models.

Authors (6)
  1. Mengwei Xu (62 papers)
  2. Jiawei Liu (156 papers)
  3. Yuanqiang Liu (3 papers)
  4. Felix Xiaozhu Lin (33 papers)
  5. Yunxin Liu (58 papers)
  6. Xuanzhe Liu (59 papers)
Citations (169)

Summary

Overview of Empirical Study on Deep Learning in Smartphone Apps

This paper presents a comprehensive empirical study on the adoption of deep learning (DL) techniques within Android smartphone applications. As the first of its kind, the research analyzes a substantial dataset comprising 16,500 apps from the Google Play Store to ascertain not only the extent of DL integration but also the implications for app developers, smartphone vendors, and those involved in DL research and development.

Methodology

Central to the paper is a static analysis tool designed to detect DL usage within apps, bypassing direct detection of specific DL code patterns in favor of recognizing the presence of DL frameworks such as TensorFlow Lite, Caffe2, and others. The analysis utilizes snapshots of the app market taken in June and September 2018, each containing the top 500 apps across 33 categories. This approach facilitates an understanding of shifts in DL adoption over time.

Findings

  1. Adoption Trends: The paper detects 211 DL apps in September 2018, representing 1.3% of the overall app set but accounting for 11.9% of total app downloads and 10.5% of reviews, indicating that DL apps are disproportionately concentrated among the most popular titles. Notably, the number of DL apps grew 27% over the three months studied.
  2. Core Usage: DL serves as a foundational element in many apps, with 81% of identified DL apps relying on it for core functionalities. A significant portion of DL applications focuses on image processing, notably photo beautification (44.5% of DL apps).
  3. Framework Utilization: While general DL frameworks such as TensorFlow continue to be popular, optimized mobile DL frameworks such as TensorFlow Lite are gaining traction, evidenced by a 258% increase in usage over three months.
  4. Model Optimization: Despite potential efficiency improvements, DL models deployed on smartphones often lack advanced optimizations such as quantization and sparsity, with only 6% of models featuring such adjustments.
  5. Security Concerns: The majority of DL models lack adequate security measures, with only 39.2% being obfuscated and 19.2% encrypted, making them vulnerable to intellectual property theft and other security challenges.
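The quantization mentioned in finding 4 trades a small amount of precision for a large reduction in model size. A minimal sketch of symmetric linear int8 quantization (a common form of the technique, not the specific scheme any of the surveyed apps use) illustrates why it shrinks storage roughly 4x relative to 32-bit floats:

```python
def quantize_int8(weights):
    """Map float weights to int8 via symmetric linear quantization.

    Returns the quantized values and the scale needed to recover them.
    Each weight then needs 1 byte instead of the 4 bytes of a float32.
    """
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0  # largest magnitude maps to +/-127
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights from int8 values."""
    return [q * scale for q in quantized]
```

The reconstruction error is bounded by half the scale per weight, which is why quantization is usually harmless for inference accuracy yet, per the paper, only about 6% of the deployed models take advantage of it.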

Implications

The findings hold significant implications for various stakeholders:

  • App Developers: The evident success of DL in popular apps suggests that developers, even those with limited resources, should consider incorporating DL capabilities to enhance their products.
  • Framework Developers: The demand for frameworks optimized for limited smartphone resources, coupled with urgent needs for model protection, underscores the importance of continuing framework development tailored to mobile environments.
  • Hardware Designers: The usage patterns identified can guide the design of DL accelerators, emphasizing support for commonly used layers such as convolutional and pooling layers.
  • Researchers: The prevalence of lightweight models should drive a shift in DL research focus, fostering solutions best suited for resource-constrained implementations and validating them on models integral to smartphone applications.

Conclusion

This empirical study bridges the gap between the current state of DL research and its application in mobile device contexts, providing valuable insights into the evolving landscape of DL on smartphones. By highlighting trends, challenges, and opportunities, the paper sets the stage for future developments across the mobile AI ecosystem.