iFormer: Integrating ConvNet and Transformer for Mobile Application (2501.15369v1)

Published 26 Jan 2025 in cs.CV and cs.AI

Abstract: We present a new family of mobile hybrid vision networks, called iFormer, with a focus on optimizing latency and accuracy on mobile applications. iFormer effectively integrates the fast local representation capacity of convolution with the efficient global modeling ability of self-attention. The local interactions are derived from transforming a standard convolutional network, i.e., ConvNeXt, to design a more lightweight mobile network. Our newly introduced mobile modulation attention removes memory-intensive operations in MHA and employs an efficient modulation mechanism to boost dynamic global representational capacity. We conduct comprehensive experiments demonstrating that iFormer outperforms existing lightweight networks across various tasks. Notably, iFormer achieves an impressive Top-1 accuracy of 80.4% on ImageNet-1k with a latency of only 1.10 ms on an iPhone 13, surpassing the recently proposed MobileNetV4 under similar latency constraints. Additionally, our method shows significant improvements in downstream tasks, including COCO object detection, instance segmentation, and ADE20k semantic segmentation, while still maintaining low latency on mobile devices for high-resolution inputs in these scenarios.

Overview of the iFormer: Integrating ConvNet and Transformer for Mobile Application

The paper introduces iFormer, a novel family of mobile hybrid vision networks designed to optimize both processing latency and accuracy in mobile applications. iFormer combines convolutional neural networks (CNNs) and vision transformers (ViTs), leveraging the fast local representation capabilities of CNNs and the efficient global modeling strengths of self-attention. This integration is crucial for deploying deep learning models on resource-constrained devices such as smartphones, where real-time on-device processing matters for user experience, privacy, and security.

The paper highlights the main deficiency of traditional CNNs: their local sliding-window mechanism limits modeling flexibility. ViTs address this limitation by capturing global features through self-attention, but the cost of self-attention grows quadratically with the number of tokens, making it poorly suited to mobile platforms. iFormer addresses both problems by introducing mobile modulation attention, which removes the memory-intensive operations found in multi-head attention (MHA) and uses a streamlined modulation mechanism to improve dynamic global representational capacity.
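To make the quadratic-cost argument concrete, the dominant memory term in standard self-attention is the N x N attention map, where N is the token count of the flattened feature map. The back-of-the-envelope sketch below is illustrative arithmetic, not taken from the paper:

```python
# Rough memory cost of the attention maps in standard multi-head
# attention (MHA). The maps alone occupy num_heads * N * N entries,
# with N = H * W tokens, so halving the resolution cuts memory 16x.
def attention_map_mib(h: int, w: int, num_heads: int, dtype_bytes: int = 2) -> float:
    n = h * w  # tokens in an H x W feature map
    return num_heads * n * n * dtype_bytes / 2**20

# A 64x64 feature map with 8 fp16 heads already needs ~256 MiB just
# for the attention maps, before Q/K/V, activations, or gradients:
print(attention_map_mib(64, 64, 8))  # 256.0
# At a 16x16 resolution, typical of a network's last stage, it is tiny:
print(attention_map_mib(16, 16, 8))  # 1.0
```

This is why hybrid designs such as iFormer reserve attention for the low-resolution stages and leave the high-resolution stages to convolution.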

Methodology

iFormer consists of a hierarchical architecture divided into four stages. The initial, high-resolution stages employ convolutional operations for rapid local representation. Starting from a "modern" ConvNeXt architecture, the model is progressively streamlined by reducing FLOPs and parameters to ensure low latency suitable for mobile devices. This culminates in a fast convolutional architecture that exhibits strong performance characteristics.
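As a concrete illustration, here is a minimal sketch of the kind of streamlined ConvNeXt-style block the early convolutional stages might use. The 7x7 depthwise kernel, BatchNorm, and 4x expansion ratio are illustrative assumptions, not iFormer's exact hyperparameters:

```python
import torch
import torch.nn as nn

class MobileConvBlock(nn.Module):
    """Sketch of a streamlined ConvNeXt-style block: a depthwise conv
    for cheap spatial mixing, then a pointwise MLP for channel mixing,
    with a residual connection. Hyperparameters are illustrative."""

    def __init__(self, dim: int, expansion: int = 4):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.BatchNorm2d(dim)  # folds into the conv at inference
        self.pw1 = nn.Conv2d(dim, dim * expansion, kernel_size=1)
        self.act = nn.GELU()
        self.pw2 = nn.Conv2d(dim * expansion, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.pw2(self.act(self.pw1(self.norm(self.dwconv(x)))))
```

BatchNorm is a common choice in latency-oriented mobile networks because, unlike LayerNorm, it can be fused into the preceding convolution at inference time.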

In the later, lower-resolution stages, iFormer incorporates single-head modulation self-attention (SHMA), which reduces memory cost by avoiding the overheads typical of multi-head attention mechanisms. SHMA modulates spatial context and runs a parallel feature-extraction branch to capture additional informative features; fusing the two outputs preserves strong performance and compensates for the representational capacity lost by dropping multiple heads.
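The paper's exact SHMA formulation is not reproduced here; the sketch below is a hypothetical reading of the described idea, a single attention head whose output is gated by a parallel branch, and every layer choice in it is an assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleHeadModulationAttention(nn.Module):
    """Hypothetical sketch of single-head modulation attention (SHMA):
    one attention head (so no multi-head split/concat reshapes), whose
    output is modulated elementwise by a parallel convolutional branch
    before the output projection. Not the paper's exact design."""

    def __init__(self, dim: int):
        super().__init__()
        self.qkv = nn.Conv2d(dim, dim * 3, kernel_size=1)
        # Parallel feature-extraction branch (assumed depthwise conv).
        self.branch = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q, k, v = self.qkv(x).flatten(2).chunk(3, dim=1)           # each (B, C, N)
        attn = F.softmax(q.transpose(1, 2) @ k / c**0.5, dim=-1)   # (B, N, N)
        out = (v @ attn.transpose(1, 2)).reshape(b, c, h, w)
        # Modulation: gate the attention output with the parallel branch.
        return self.proj(out * self.branch(x))
```

Keeping a single head avoids the split/transpose/concat reshapes of MHA, which are disproportionately expensive on mobile accelerators, while the parallel branch restores some of the expressiveness a single head gives up.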

iFormer is reported to surpass existing lightweight networks on several visual recognition tasks. For example, iFormer-M achieves 80.4% Top-1 accuracy on ImageNet-1k with only 1.10 ms latency on an iPhone 13, exceeding recent models such as MobileNetV4 under similar latency constraints, without relying on advanced training strategies such as knowledge distillation.

Implications and Future Directions

The paper contributes to the development of AI models optimized for mobile devices by proposing a network architecture that balances model capacity with computational efficiency. The implications extend to practical scenarios such as real-time video processing, augmented reality, and edge computing, where processing data locally enhances privacy and security.

Theoretically, iFormer sets a precedent for future research on hybrid models that integrate CNNs and ViTs, paving the way for efficient network designs tailored to edge and mobile computing environments. It also invites further optimization of self-attention mechanisms for deployment in resource-constrained settings.

In conclusion, iFormer exemplifies a methodical approach to designing AI infrastructure for mobile applications by synthesizing the strengths of CNNs and ViTs. The work invites future exploration of network efficiency improvements and of additional applications and deployment scenarios, and the strategies employed in iFormer may inform the continued evolution of machine learning models toward adaptable architectures that thrive under diverse hardware constraints.

Authors (1)
  1. Chuanyang Zheng (21 papers)