
Improving Pixel-based MIM by Reducing Wasted Modeling Capability (2308.00261v1)

Published 1 Aug 2023 in cs.CV

Abstract: There has been significant progress in Masked Image Modeling (MIM). Existing MIM methods can be broadly categorized into two groups based on the reconstruction target: pixel-based and tokenizer-based approaches. The former offers a simpler pipeline and lower computational cost, but it is known to be biased toward high-frequency details. In this paper, we provide a set of empirical studies to confirm this limitation of pixel-based MIM and propose a new method that explicitly utilizes low-level features from shallow layers to aid pixel reconstruction. By incorporating this design into our base method, MAE, we reduce the wasted modeling capability of pixel-based MIM, improving its convergence and achieving non-trivial improvements across various downstream tasks. To the best of our knowledge, we are the first to systematically investigate multi-level feature fusion for isotropic architectures like the standard Vision Transformer (ViT). Notably, when applied to a smaller model (e.g., ViT-S), our method yields significant performance gains, such as 1.2\% on fine-tuning, 2.8\% on linear probing, and 2.6\% on semantic segmentation. Code and models are available at https://github.com/open-mmlab/mmpretrain.

Improving Pixel-based MIM by Reducing Wasted Modeling Capability

This paper addresses the limitations of pixel-based Masked Image Modeling (MIM), a self-supervised learning (SSL) approach in computer vision. Pixel-based MIM, while computationally efficient, tends to focus excessively on high-frequency details due to its objective of reconstructing raw pixel values. The authors propose a novel method to mitigate this issue by incorporating multi-level feature fusion, enabling models to utilize low-level features from shallow layers to enhance pixel-based reconstruction tasks.

Methodology

The authors categorize existing MIM approaches into pixel-based and tokenizer-based frameworks. While the former offers lower computational cost, it is biased toward high-frequency components. Building on this observation, the paper introduces a multi-level feature fusion strategy that integrates shallow-layer features into the pixel reconstruction task, improving the convergence and expressiveness of the underlying model, such as the Vision Transformer (ViT).
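The fusion idea can be sketched as follows. This is a minimal illustration, not the authors' implementation (see their repository for the real code): outputs from a few selected shallow transformer blocks are averaged and combined with the final-layer features before they are passed to the pixel-reconstruction decoder. The function name, the fused layer indices, and the 50/50 combination weight are all hypothetical choices for illustration.

```python
import numpy as np

def fuse_multilevel_features(layer_outputs, fuse_ids, weights=None):
    """Fuse features from selected shallow layers with the final layer.

    layer_outputs: list of arrays, each (num_tokens, dim), one per block.
    fuse_ids: indices of shallow blocks whose low-level features are reused.
    weights: optional fusion weights over the selected layers (uniform if None).
    Returns the fused features that would feed the pixel-reconstruction decoder.
    """
    selected = [layer_outputs[i] for i in fuse_ids]
    if weights is None:
        weights = np.full(len(selected), 1.0 / len(selected))
    shallow = sum(w * f for w, f in zip(weights, selected))
    # Combine low-level (shallow) and high-level (last-block) information.
    return 0.5 * shallow + 0.5 * layer_outputs[-1]

# Toy example: 12 transformer blocks, 196 patch tokens, 768-dim features.
rng = np.random.default_rng(0)
outputs = [rng.standard_normal((196, 768)) for _ in range(12)]
fused = fuse_multilevel_features(outputs, fuse_ids=[1, 3, 5])
print(fused.shape)
```

In a real ViT the per-block outputs could be collected with forward hooks, and the fusion weights could be made learnable rather than uniform.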

Experimental findings reveal that these modifications yield considerable performance gains, particularly in smaller architectures like ViT-S. Notable improvements were observed in fine-tuning (1.2%), linear probing (2.8%), and semantic segmentation (2.6%), showcasing the method's efficacy in various downstream tasks.

Key Contributions and Experiments

The paper's core contributions include:

  1. Empirical Analysis: Demonstrating the inherent focus of pixel-based MIM methods on high-frequency components and proposing a corrective strategy through empirical studies.
  2. Fusion Strategy Implementation: Introducing a multi-level feature fusion technique, which involves dynamically integrating shallow layer features across training iterations. This approach optimizes the model’s capacity to capture more comprehensive semantic representations.
  3. Extensive Evaluation: Validating the method's effectiveness via comparative analysis with existing MIM strategies and evaluating robustness on out-of-distribution (OOD) datasets such as ImageNet-C and ImageNet-R.
  4. Optimization Insights: Highlighting how the proposed solution flattens the loss landscape and modifies the frequency distribution in latent feature representations, resulting in more balanced and robust feature learning.
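The frequency-distribution claim in point 4 can be made concrete with a simple spectral diagnostic. The sketch below is illustrative and not the paper's exact analysis: it measures the fraction of an image's (or feature map's) spectral energy above a radial frequency cutoff, which is one way to quantify how strongly a signal is dominated by high-frequency detail. The function name and the 0.25 cutoff are arbitrary choices.

```python
import numpy as np

def high_freq_energy_ratio(image, cutoff=0.25):
    """Fraction of 2-D spectral energy above a radial frequency cutoff.

    `cutoff` is a fraction of the sampling frequency; higher return values
    indicate a signal dominated by high-frequency content.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Radial distance from the spectrum center, normalized per axis.
    radius = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    return spectrum[radius > cutoff].sum() / spectrum.sum()

# A smooth gradient is low-frequency; white noise spreads energy everywhere.
smooth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noise = np.random.default_rng(0).standard_normal((64, 64))
print(high_freq_energy_ratio(smooth) < high_freq_energy_ratio(noise))  # True
```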

Implications and Future Directions

The reduction in wasted modeling capacity through multi-level feature fusion not only enhances pixel-based MIM's performance but also narrows the gap between pixel-based approaches and those utilizing pre-trained tokenizers. This has practical significance, potentially lowering computational demands while improving model robustness and efficiency.

Theoretically, this work extends the understanding of feature-level integration in SSL, positioning it as a fundamental aspect of improving pixel-based methodologies. It encourages further exploration into architectural adjustments that can capitalize on readily available image features, thus broadening the scope and application of MIM frameworks.

Future research might focus on refining the selection of beneficial features across layers or incorporating these insights into alternative MIM models and architectures. Such advancements may extend SSL's applicability across diverse and complex visual tasks, making these methods more accessible and efficient.

Authors (6)
  1. Yuan Liu
  2. Songyang Zhang
  3. Jiacheng Chen
  4. Zhaohui Yu
  5. Kai Chen
  6. Dahua Lin
Citations (20)