Dual Aggregation Transformer for Image Super-Resolution (2308.03364v2)

Published 7 Aug 2023 in cs.CV

Abstract: Transformer has recently gained considerable popularity in low-level vision tasks, including image super-resolution (SR). These networks utilize self-attention along different dimensions, spatial or channel, and achieve impressive performance. This inspires us to combine the two dimensions in Transformer for a more powerful representation capability. Based on the above idea, we propose a novel Transformer model, Dual Aggregation Transformer (DAT), for image SR. Our DAT aggregates features across spatial and channel dimensions, in the inter-block and intra-block dual manner. Specifically, we alternately apply spatial and channel self-attention in consecutive Transformer blocks. The alternate strategy enables DAT to capture the global context and realize inter-block feature aggregation. Furthermore, we propose the adaptive interaction module (AIM) and the spatial-gate feed-forward network (SGFN) to achieve intra-block feature aggregation. AIM complements two self-attention mechanisms from corresponding dimensions. Meanwhile, SGFN introduces additional non-linear spatial information in the feed-forward network. Extensive experiments show that our DAT surpasses current methods. Code and models are obtainable at https://github.com/zhengchen1999/DAT.

Authors (6)
  1. Zheng Chen (221 papers)
  2. Yulun Zhang (167 papers)
  3. Jinjin Gu (56 papers)
  4. Linghe Kong (44 papers)
  5. Xiaokang Yang (207 papers)
  6. Fisher Yu (104 papers)
Citations (100)

Summary

Overview of "Dual Aggregation Transformer for Image Super-Resolution"

The paper introduces the Dual Aggregation Transformer (DAT), a Transformer-based model for image super-resolution (SR) that aggregates spatial and channel features. It addresses a limitation of traditional convolutional approaches, which often struggle to capture the global dependencies crucial for high-quality image reconstruction.

Methodology

The authors propose the Dual Aggregation Transformer (DAT), which aggregates features across the spatial and channel dimensions through both inter-block and intra-block mechanisms.

  1. Inter-Block Feature Aggregation:
    • DAT alternates between spatial window self-attention (SW-SA) and channel-wise self-attention (CW-SA) in successive Transformer blocks. This alternation lets the model capture both local spatial and global channel context, enriching the representations needed for SR (a sketch of this schedule follows the list).
  2. Intra-Block Feature Aggregation:
    • An Adaptive Interaction Module (AIM) is introduced to enhance the fusion of features from self-attention and convolutional branches. AIM uses spatial and channel interaction mechanisms to adaptively combine global and local information.
    • The Spatial-Gate Feed-Forward Network (SGFN) injects additional nonlinear spatial information into the feed-forward network and reduces channel redundancy, strengthening its handling of spatial features (see the SGFN sketch further below).
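
To make the alternating schedule in point 1 concrete, here is a minimal PyTorch sketch. The `ChannelAttention` and `SpatialAttention` classes are simplified stand-ins (plain global attention instead of the paper's windowed variant, with no relative position bias or convolutional branch); only the even/odd alternation pattern is the point being illustrated, and all names are illustrative rather than taken from the official code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Simplified channel-wise self-attention: the attention map is
    (C x C), so every channel attends to every other channel globally."""
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                               # x: (B, N, C)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = F.normalize(q.transpose(1, 2), dim=-1)      # (B, C, N)
        k = F.normalize(k.transpose(1, 2), dim=-1)
        attn = (q @ k.transpose(1, 2)).softmax(dim=-1)  # (B, C, C)
        out = (attn @ v.transpose(1, 2)).transpose(1, 2)
        return self.proj(out)

class SpatialAttention(nn.Module):
    """Stand-in for spatial window self-attention; for brevity this is
    plain global spatial attention rather than the windowed variant."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.mha = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                               # x: (B, N, C)
        out, _ = self.mha(x, x, x)
        return out

class AlternatingBlocks(nn.Module):
    """Inter-block aggregation: even blocks mix spatially, odd blocks
    mix across channels, mirroring DAT's alternating strategy."""
    def __init__(self, dim, depth=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            [SpatialAttention(dim) if i % 2 == 0 else ChannelAttention(dim)
             for i in range(depth)])
        self.norms = nn.ModuleList([nn.LayerNorm(dim) for _ in range(depth)])

    def forward(self, x):                               # x: (B, N, C)
        for norm, attn in zip(self.norms, self.blocks):
            x = x + attn(norm(x))                       # pre-norm residual
        return x
```

A quick usage check: `AlternatingBlocks(dim=64, depth=4)(torch.randn(1, 16 * 16, 64))` returns a tensor of the same `(B, N, C)` shape, with blocks 0 and 2 mixing spatially and blocks 1 and 3 mixing across channels.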

Overall, these dual aggregation strategies are designed to achieve superior feature representation, facilitating high-quality image reconstruction.
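
The SGFN admits a similarly compact sketch. Reading the paper's description, the feed-forward hidden features are split along the channel dimension, a depthwise convolution over one half produces a spatial gate, and that gate modulates the other half before the output projection. The code below is one plausible rendering of that design under those assumptions, not the reference implementation:

```python
import torch
import torch.nn as nn

class SGFN(nn.Module):
    """Spatial-gate feed-forward network (simplified sketch): hidden
    features are split in two along channels; a depthwise conv over one
    half yields a spatial gate that multiplies the other half."""
    def __init__(self, dim, expansion=2):
        super().__init__()
        hidden = dim * expansion
        self.fc1 = nn.Linear(dim, hidden)
        self.act = nn.GELU()
        # cheap spatial mixing: 3x3 depthwise conv on half the channels
        self.dwconv = nn.Conv2d(hidden // 2, hidden // 2, kernel_size=3,
                                padding=1, groups=hidden // 2)
        self.fc2 = nn.Linear(hidden // 2, dim)

    def forward(self, x, H, W):              # x: (B, N, C), N == H * W
        B = x.shape[0]
        x = self.act(self.fc1(x))            # expand: (B, N, hidden)
        x1, x2 = x.chunk(2, dim=-1)          # channel split
        # the gate branch needs an image-shaped tensor for the conv
        x2 = x2.transpose(1, 2).reshape(B, -1, H, W)
        gate = self.dwconv(x2).flatten(2).transpose(1, 2)
        return self.fc2(x1 * gate)           # gated features -> dim
```

The depthwise convolution is what distinguishes this from a plain MLP: it adds per-channel spatial context at negligible parameter cost, which is the "additional nonlinear spatial information" the summary refers to.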

Experimental Results

The authors conduct extensive experiments across several benchmark datasets, using upscaling factors of ×2, ×3, and ×4. The results indicate that DAT consistently outperforms existing state-of-the-art methods, notably on challenging datasets such as Urban100 and Manga109.

  • Performance Gains: The paper reports consistent improvements in PSNR and SSIM, indicating that DAT produces sharper, more faithful reconstructions than competing SR methods (the usual evaluation convention is sketched after this list).
  • Computational Efficiency: Compared with models such as SwinIR and CAT-A, DAT delivers competitive or better performance at lower computational complexity (FLOPs) and a smaller parameter count.
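
For readers reproducing the numbers: SR papers in this line of work conventionally compute PSNR and SSIM on the Y (luminance) channel of YCbCr after cropping a border equal to the scale factor. The helper below follows that common convention; it is a generic sketch, so the paper's released code should be treated as the authoritative protocol.

```python
import numpy as np

def rgb_to_y(img):
    """ITU-R BT.601 luminance for an RGB array in [0, 255], shape (H, W, 3)."""
    return (16.0 + 65.481 * img[..., 0] / 255.0
                 + 128.553 * img[..., 1] / 255.0
                 + 24.966 * img[..., 2] / 255.0)

def psnr_y(sr, hr, scale):
    """PSNR on the Y channel with a `scale`-pixel border crop,
    the usual SR evaluation convention."""
    sr_y = rgb_to_y(sr.astype(np.float64))[scale:-scale, scale:-scale]
    hr_y = rgb_to_y(hr.astype(np.float64))[scale:-scale, scale:-scale]
    mse = np.mean((sr_y - hr_y) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
```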

Implications and Future Directions

The introduction of DAT marks an important step in applying Transformers to low-level vision tasks, specifically image super-resolution. The dual aggregation approach is shown to integrate spatial and channel information effectively, providing a robust framework for future research.

Potential developments may include:

  • Further refinement of AIM and SGFN modules to optimize computational overhead.
  • Exploration of DAT applications in other related vision tasks requiring enhanced detail preservation and reconstruction.
  • Investigation into the integration of additional attention mechanisms to further augment the model’s adaptability and efficiency.

In conclusion, the Dual Aggregation Transformer represents a significant advance in image super-resolution through its innovative feature aggregation strategies, and stands as a promising foundation for future enhancements in both theoretical models and practical applications.
