EdgeViTs: Competing Light-weight CNNs on Mobile Devices with Vision Transformers (2205.03436v2)

Published 6 May 2022 in cs.CV

Abstract: Self-attention based models such as vision transformers (ViTs) have emerged as a very competitive architecture alternative to convolutional neural networks (CNNs) in computer vision. Despite increasingly stronger variants with ever-higher recognition accuracies, due to the quadratic complexity of self-attention, existing ViTs are typically demanding in computation and model size. Although several successful design choices (e.g., the convolutions and hierarchical multi-stage structure) of prior CNNs have been reintroduced into recent ViTs, they are still not sufficient to meet the limited resource requirements of mobile devices. This motivates a very recent attempt to develop light ViTs based on the state-of-the-art MobileNet-v2, but still leaves a performance gap behind. In this work, pushing further along this under-studied direction we introduce EdgeViTs, a new family of light-weight ViTs that, for the first time, enable attention-based vision models to compete with the best light-weight CNNs in the tradeoff between accuracy and on-device efficiency. This is realized by introducing a highly cost-effective local-global-local (LGL) information exchange bottleneck based on optimal integration of self-attention and convolutions. For device-dedicated evaluation, rather than relying on inaccurate proxies like the number of FLOPs or parameters, we adopt a practical approach of focusing directly on on-device latency and, for the first time, energy efficiency. Specifically, we show that our models are Pareto-optimal when both accuracy-latency and accuracy-energy trade-offs are considered, achieving strict dominance over other ViTs in almost all cases and competing with the most efficient CNNs. Code is available at https://github.com/saic-fi/edgevit.

Overview of the EdgeViTs Paper

The paper "EdgeViTs: Competing Light-weight CNNs on Mobile Devices with Vision Transformers" addresses the challenge of designing efficient Vision Transformers (ViTs) capable of running on mobile and edge devices. ViTs have emerged as a compelling alternative to traditional Convolutional Neural Networks (CNNs) due to their heightened recognition accuracies. However, the self-attention mechanism employed by ViTs inherently possesses quadratic complexity, making them computationally expensive and resource-intensive for mobile applications. Hence, the paper introduces a new family of lightweight vision transformers, termed EdgeViTs, which aim to effectively compete with state-of-the-art lightweight CNNs in terms of on-device efficiency and accuracy.

Key Contributions

The authors introduce EdgeViTs, a new series of lightweight vision transformers built around a novel Local-Global-Local (LGL) information exchange bottleneck. This mechanism combines self-attention with convolutions so as to balance the long-range modeling capability of self-attention against the efficiency of the local aggregation typical of CNNs. The LGL module reduces the computational burden with a sparse attention mechanism: only a subset of delegate tokens is processed to derive global context, which is then propagated back to their neighbouring tokens.
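The following is a minimal PyTorch-style sketch of such an LGL block. The specific layer choices (a depthwise convolution for local aggregation, average pooling to pick delegate tokens, and a depthwise transposed convolution for local propagation), as well as the class name and default hyperparameters, are illustrative assumptions; the authors' exact implementation is available in the linked repository.

```python
import torch
import torch.nn as nn


class LGLBlock(nn.Module):
    """Sketch of a Local-Global-Local (LGL) bottleneck (illustrative, not the official code).

    Input/output: NCHW feature maps; assumes H and W are divisible by sample_rate
    and dim is divisible by num_heads.
    """

    def __init__(self, dim, num_heads=4, sample_rate=4):
        super().__init__()
        r = sample_rate
        # Local aggregation: each token gathers information from its spatial neighbours.
        self.local_agg = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)
        # Sparse global attention: only one delegate token per r x r window attends.
        self.subsample = nn.AvgPool2d(kernel_size=r, stride=r)
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Local propagation: broadcast global context back to all tokens in each window.
        self.local_prop = nn.ConvTranspose2d(dim, dim, kernel_size=r, stride=r, groups=dim)

    def forward(self, x):                     # x: (B, C, H, W)
        x = x + self.local_agg(x)             # local information exchange
        b, c, height, width = x.shape
        z = self.subsample(x)                 # (B, C, H/r, W/r) delegate tokens
        h, w = z.shape[2:]
        z = z.flatten(2).transpose(1, 2)      # (B, h*w, C) token sequence
        zn = self.norm(z)
        z, _ = self.attn(zn, zn, zn)          # global attention among delegates only
        z = z.transpose(1, 2).reshape(b, c, h, w)
        x = x + self.local_prop(z)            # propagate global context back locally
        return x
```

For instance, an LGLBlock(dim=96, num_heads=4, sample_rate=4) applied to a 56x56 feature map runs self-attention over only 14x14 = 196 delegate tokens rather than all 3136.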

In contrast to earlier proxy-based evaluations that focus on FLOPs or parameter counts, metrics which do not directly reflect practical efficiency on mobile devices, the paper evaluates EdgeViTs by on-device latency and energy consumption, providing a more realistic assessment of deployment on modern mobile hardware.
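As a rough illustration of the latency side of such a protocol, the sketch below times a model's forward pass after a few warm-up iterations. This is only a host-side proxy written for this summary (the function name and defaults are made up); the paper itself profiles latency and energy directly on mobile hardware.

```python
import time

import torch


@torch.no_grad()
def measure_latency_ms(model, input_size=(1, 3, 224, 224), warmup=10, runs=50):
    """Average forward-pass latency in milliseconds on the host device (illustrative only)."""
    model.eval()
    x = torch.randn(*input_size)
    for _ in range(warmup):              # warm-up to stabilise caches and allocators
        model(x)
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    return (time.perf_counter() - start) / runs * 1000.0
```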

Experimental Validation

Experimental results demonstrate that EdgeViTs are Pareto-optimal in the trade-off between accuracy and efficiency on image classification, object detection, and semantic segmentation tasks across mobile platforms. Compared with efficient CNN architectures such as MobileNet and EfficientNet, as well as ViT architectures such as MobileViT and PVT, EdgeViTs consistently deliver superior accuracy-efficiency trade-offs. In particular, EdgeViT models outperform alternative ViT models in both accuracy and computational efficiency, and achieve trade-offs that rival even the most competitive CNNs in mobile contexts.

Technical Implications

The introduction of EdgeViTs is significant because it advances the state of the art in designing ViTs for resource-constrained devices, broadening the range of potential applications, such as smartphones, robotics, and AR/VR devices, where traditional architectures may fail to meet latency requirements due to computational constraints. The LGL bottleneck at the heart of EdgeViTs makes a compelling case for reducing the cost of global processing through sparse token sampling combined with local-global-local information exchange, advancing the potential of ViTs as a staple for on-device vision tasks.

Future Directions

This research opens avenues for architecture search techniques tailored to lightweight ViT models under resource constraints. Further work could focus on hybrid approaches that integrate automated design processes to optimize such transformer architectures across a broader range of hardware. Cross-disciplinary applications of EdgeViTs can also be envisioned, for example in edge-based IoT systems where energy efficiency is paramount.

In conclusion, the EdgeViTs paper offers compelling insights into designing computationally efficient vision models that retain the strengths of transformers while being tailored for mobile deployment, an important step in aligning machine learning models with practical, real-world applications.

Authors (8)
  1. Junting Pan (30 papers)
  2. Adrian Bulat (47 papers)
  3. Fuwen Tan (10 papers)
  4. Xiatian Zhu (139 papers)
  5. Hongsheng Li (340 papers)
  6. Georgios Tzimiropoulos (86 papers)
  7. Brais Martinez (38 papers)
  8. Lukasz Dudziak (4 papers)
Citations (147)