
More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity (2207.03620v3)

Published 7 Jul 2022 in cs.CV

Abstract: Transformers have quickly shined in the computer vision world since the emergence of Vision Transformers (ViTs). The dominant role of convolutional neural networks (CNNs) seems to be challenged by increasingly effective transformer-based models. Very recently, a couple of advanced convolutional models strike back with large kernels motivated by the local-window attention mechanism, showing appealing performance and efficiency. While one of them, i.e. RepLKNet, impressively manages to scale the kernel size to 31x31 with improved performance, the performance starts to saturate as the kernel size continues growing, compared to the scaling trend of advanced ViTs such as Swin Transformer. In this paper, we explore the possibility of training extreme convolutions larger than 31x31 and test whether the performance gap can be eliminated by strategically enlarging convolutions. This study ends up with a recipe for applying extremely large kernels from the perspective of sparsity, which can smoothly scale up kernels to 61x61 with better performance. Built on this recipe, we propose Sparse Large Kernel Network (SLaK), a pure CNN architecture equipped with sparse factorized 51x51 kernels that can perform on par with or better than state-of-the-art hierarchical Transformers and modern ConvNet architectures like ConvNeXt and RepLKNet, on ImageNet classification as well as a wide range of downstream tasks including semantic segmentation on ADE20K, object detection on PASCAL VOC 2007, and object detection/segmentation on MS COCO.

Scaling Convolutional Neural Networks: A Sparse Approach to Large Kernels

The paper under review presents a compelling exploration of convolutional neural networks (CNNs), specifically examining the training and performance benefits of convolutional kernels larger than 31x31. Contemporary large-kernel architectures have stopped at that size because of the computational challenges that come with larger dimensions. The researchers propose an approach grounded in sparsity principles to manage these challenges, ultimately introducing the Sparse Large Kernel Network (SLaK).

Summary of Findings

The backdrop of this paper is the rise of transformers in vision tasks, a trend that has challenged the dominance of CNNs. Recent CNNs with large kernels, such as RepLKNet, have demonstrated competitive performance, yet their gains saturate once the kernel size grows beyond 31x31. The research aims to bridge this gap by developing a methodology for strategically employing even larger kernels without compromising performance.

The authors devised a two-step recipe for efficiently scaling kernel sizes up to 61x61:

  1. Kernel Decomposition: Instead of using one massive monolithic kernel, this method decomposes it into two smaller, parallel, rectangular kernels (M×N and N×M, where N < M) whose outputs are combined, allowing the effective kernel size to be scaled efficiently to the desired large sizes (a minimal sketch follows this list).
  2. Sparsity Integration: Inspired by biological systems such as the human visual cortex, the SLaK model employs sparsity to mitigate the efficiency problems typically associated with extremely large kernels. It uses sparse convolution kernels and dynamically adjusts the sparse connections throughout training, which significantly reduces the model's computational footprint (a toy prune-and-regrow step is sketched after the decomposition example below).
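
The following minimal PyTorch sketch illustrates the decomposition idea under stated assumptions: the class name, the default sizes M=51 and N=5, the depthwise (grouped) convolutions, and the per-branch BatchNorm are illustrative choices, not the authors' reference implementation.

```python
import torch
import torch.nn as nn


class DecomposedLargeKernel(nn.Module):
    """Sketch of the kernel-decomposition step: one dense M x M depthwise
    convolution is replaced by two parallel rectangular depthwise convolutions
    (M x N and N x M, with N < M) whose outputs are summed. Class name,
    default sizes, and the per-branch BatchNorm are illustrative assumptions."""

    def __init__(self, dim: int, M: int = 51, N: int = 5):
        super().__init__()
        # Tall M x N branch: large receptive field along the height axis.
        self.conv_mn = nn.Conv2d(dim, dim, kernel_size=(M, N),
                                 padding=(M // 2, N // 2), groups=dim)
        # Wide N x M branch: large receptive field along the width axis.
        self.conv_nm = nn.Conv2d(dim, dim, kernel_size=(N, M),
                                 padding=(N // 2, M // 2), groups=dim)
        self.bn_mn = nn.BatchNorm2d(dim)
        self.bn_nm = nn.BatchNorm2d(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Summing the two rectangular branches approximates the response of a
        # single large square kernel at a fraction of the parameter count.
        return self.bn_mn(self.conv_mn(x)) + self.bn_nm(self.conv_nm(x))


# Example: spatial dimensions are preserved because of the symmetric padding.
block = DecomposedLargeKernel(dim=64)
out = block(torch.randn(1, 64, 56, 56))  # -> shape (1, 64, 56, 56)
```

With the defaults, the two rectangular branches use roughly 2·M·N·dim depthwise weights (2·51·5 per channel) instead of the M·M·dim (51·51 per channel) a dense square kernel would require.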
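For the second step, here is a toy prune-and-regrow update in the spirit of dynamic sparse training; the magnitude-based pruning criterion, the random regrowth, and the regrow_frac parameter are simplifying assumptions rather than the paper's exact schedule.

```python
import torch


@torch.no_grad()
def prune_and_regrow(conv: torch.nn.Conv2d, sparsity: float = 0.6,
                     regrow_frac: float = 0.1) -> torch.Tensor:
    """One toy dynamic-sparsity step: prune the smallest-magnitude weights so
    only (1 - sparsity) of them stay active, then re-activate a random subset
    of pruned positions. Returns the boolean mask of active weights. The
    criteria and schedule are simplifications, not SLaK's exact procedure."""
    w = conv.weight.data.view(-1)
    n_keep = int(w.numel() * (1.0 - sparsity))

    # Prune: keep only the largest-magnitude weights.
    keep_idx = w.abs().topk(n_keep).indices
    mask = torch.zeros_like(w, dtype=torch.bool)
    mask[keep_idx] = True

    # Regrow: re-activate a random subset of pruned positions, initialized to
    # zero so that training can adapt them in later steps.
    pruned_idx = (~mask).nonzero(as_tuple=True)[0]
    n_regrow = min(int(regrow_frac * n_keep), pruned_idx.numel())
    if n_regrow > 0:
        regrow = pruned_idx[torch.randperm(pruned_idx.numel())[:n_regrow]]
        mask[regrow] = True
        w[regrow] = 0.0

    w.mul_(mask.to(w.dtype))  # inactive weights stay exactly zero
    return mask.view_as(conv.weight)
```

In practice the returned mask would be re-applied after every optimizer step so that pruned weights remain zero between sparsity updates.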

Performance and Implications

The SLaK architecture is evaluated across standard benchmarks, including ImageNet for classification, ADE20K for semantic segmentation, and MS COCO for object detection and segmentation. Notably, SLaK achieves results on par with leading transformer models such as Swin Transformer and surpasses CNN counterparts including ConvNeXt and RepLKNet in both accuracy and computational efficiency. These findings underscore the viability of ultra-large kernels when appropriately managed via sparsity and decomposition.

From a theoretical standpoint, this approach challenges the contemporary understanding of kernel design, implying that convolutions with very large receptive fields can be practical and beneficial when combined with sparse methodologies. It counters the conventional belief that only deep stacks of small kernels can yield efficient representations for vision tasks.

Future Directions

The research paves the way for future exploration in several directions. Continued refinement of sparsity techniques could further enhance model efficiency, particularly if coupled with dedicated hardware support for sparse computation, which current GPUs and TPUs provide only to a limited extent.

Moreover, examining the broader application of sparse large kernels in other domains, such as audio processing or time-series prediction, could uncover additional benefits and use cases. Similarly, integrating learnings from this sparse kernel approach with transformer architectures could yield hybrid models that capitalize on the strengths of both paradigms.

In conclusion, the development of the SLaK architecture represents a significant advance in the quest for more efficient and capable CNN frameworks. By leveraging the principles of sparsity, the authors not only demonstrate the potential of ultra-large kernels but also redefine the architectural possibilities for future deep learning models in vision and beyond.

Authors (10)
  1. Shiwei Liu
  2. Tianlong Chen
  3. Xiaohan Chen
  4. Xuxi Chen
  5. Qiao Xiao
  6. Boqian Wu
  7. Tommi Kärkkäinen
  8. Mykola Pechenizkiy
  9. Decebal Mocanu
  10. Zhangyang Wang
Citations (148)