
Towards Extreme Image Compression with Latent Feature Guidance and Diffusion Prior (2404.18820v4)

Published 29 Apr 2024 in eess.IV and cs.CV

Abstract: Image compression at extremely low bitrates (below 0.1 bits per pixel (bpp)) is a significant challenge due to substantial information loss. In this work, we propose a novel two-stage extreme image compression framework that exploits the powerful generative capability of pre-trained diffusion models to achieve realistic image reconstruction at extremely low bitrates. In the first stage, we treat the latent representation of images in the diffusion space as guidance, employing a VAE-based compression approach to compress images and initially decode the compressed information into content variables. The second stage leverages pre-trained stable diffusion to reconstruct images under the guidance of content variables. Specifically, we introduce a small control module to inject content information while keeping the stable diffusion model fixed to maintain its generative capability. Furthermore, we design a space alignment loss to force the content variables to align with the diffusion space and provide the necessary constraints for optimization. Extensive experiments demonstrate that our method significantly outperforms state-of-the-art approaches in terms of visual performance at extremely low bitrates. The source code and trained models are available at https://github.com/huai-chang/DiffEIC.

Proposed Techniques for Extreme Image Compression Utilizing Latent Feature Guidance and Diffusion Models

Introduction to Extreme Image Compression

Image compression, an essential procedure for efficient data transmission and storage, has seen practical deployment through standards such as JPEG2000 and VVC. These conventional methods, however, falter at extremely low bitrates, producing visually unappealing compression artifacts or overly smooth images. Addressing this challenge, recent work in deep learning has turned to generative models to significantly improve reconstruction quality at low bitrates.

Methodology

This paper introduces a two-stage method for image compression below 0.1 bits per pixel (bpp), combining a compressive Variational Autoencoder (VAE) with a pre-trained diffusion model modulated by external content guidance. The hybrid approach comprises two main components:

  1. Latent Feature-Guided Compression Module (LFGCM): Built on a compressive VAE, this module encodes and compresses input images into compact content variables using a transform-coding paradigm. External guidance steers these variables toward the latent space of the diffusion model, easing subsequent decoding.
  2. Conditional Diffusion Decoding Module (CDDM): This module decodes content variables into images using a pre-trained Stable Diffusion model that remains frozen during training, preserving its generative capability. Content information is injected through small trainable control modules, refining reconstruction quality; a minimal sketch of the two-stage design follows.
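
The sketch below illustrates this two-stage design in PyTorch, including the space alignment loss mentioned in the abstract. All module names, layer sizes, and shapes are illustrative assumptions rather than the authors' actual DiffEIC implementation; entropy coding and the frozen Stable Diffusion UNet are abstracted away.

```python
# Minimal sketch of the two-stage design; module names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentEncoder(nn.Module):
    """Stage 1 (hypothetical): VAE-style analysis transform producing content variables."""
    def __init__(self, ch=64, latent_ch=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(ch, ch, 4, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(ch, ch, 4, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(ch, latent_ch, 3, padding=1),  # compact content variables
        )
    def forward(self, x):
        return self.net(x)

class ControlModule(nn.Module):
    """Stage 2 (hypothetical): small trainable branch injecting content into a
    frozen diffusion backbone. The zero-initialized projection means training
    starts from the unmodified pre-trained behavior."""
    def __init__(self, latent_ch=4, cond_ch=320):
        super().__init__()
        self.proj = nn.Conv2d(latent_ch, cond_ch, 1)
        self.zero_conv = nn.Conv2d(cond_ch, cond_ch, 1)
        nn.init.zeros_(self.zero_conv.weight)
        nn.init.zeros_(self.zero_conv.bias)
    def forward(self, content):
        return self.zero_conv(F.gelu(self.proj(content)))

def space_alignment_loss(content, sd_latent):
    """Pull content variables toward the latent produced by the pre-trained
    diffusion model's own VAE, as the space alignment loss describes."""
    return F.mse_loss(content, sd_latent)

if __name__ == "__main__":
    x = torch.randn(1, 3, 256, 256)         # input image
    sd_latent = torch.randn(1, 4, 32, 32)   # stand-in for the SD-VAE encoding of x
    z = ContentEncoder()(x)                 # content variables, shape (1, 4, 32, 32)
    residual = ControlModule()(z)           # features injected into the frozen UNet
    loss = space_alignment_loss(z, sd_latent)
    print(z.shape, residual.shape, loss.item())
```

The zero-initialized injection mirrors the common ControlNet-style recipe for adding conditions to a frozen diffusion model without disturbing it at the start of training; whether DiffEIC uses exactly this mechanism is an assumption here.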

Empirical Validation

Extensive experiments on standard datasets, including Kodak, Tecnick, and CLIC2020, demonstrate superior performance over existing methods, particularly in preserving perceptual quality and fidelity at extremely low bitrates. Notably, the method outperforms contemporary approaches on perceptual metrics such as LPIPS, FID, and KID (a usage sketch for LPIPS follows), excelling especially where bitrate constraints are most stringent.
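
As a concrete example of one reported metric, the snippet below computes LPIPS with the `lpips` package (https://github.com/richzhang/PerceptualSimilarity). The random tensors are stand-ins for a reference image and its reconstruction; FID and KID would instead be computed over whole datasets with a dedicated tool.

```python
import torch
import lpips

# AlexNet backbone is the common default for LPIPS.
loss_fn = lpips.LPIPS(net='alex')

# LPIPS expects RGB tensors scaled to [-1, 1].
ref = torch.rand(1, 3, 256, 256) * 2 - 1   # stand-in for the original image
rec = torch.rand(1, 3, 256, 256) * 2 - 1   # stand-in for the reconstruction

with torch.no_grad():
    d = loss_fn(ref, rec)
print(f"LPIPS: {d.item():.4f}")  # lower = perceptually closer
```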

  • Quantitative Performance: The proposed method achieves substantial bitrate savings while maintaining compelling image quality, improving over both traditional codecs and recent deep learning-based methods.
  • Qualitative Assessments: Visual comparisons further substantiate the quantitative findings, with the proposed method consistently delivering visually pleasing and detailed reconstructions even at bitrates below 0.1 bpp (see the bpp sketch after this list).
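
For reference, bits per pixel is simply the compressed bitstream size in bits divided by the pixel count; a tiny sketch with hypothetical numbers:

```python
def bits_per_pixel(num_bytes: int, height: int, width: int) -> float:
    """bpp = total bits / total pixels."""
    return num_bytes * 8 / (height * width)

# e.g. a 768x512 Kodak image compressed to 4,800 bytes:
print(bits_per_pixel(4800, 768, 512))  # ~0.098 bpp, inside the sub-0.1 bpp regime
```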

Future Perspectives

Fusing learned compression with the generative prowess of pre-trained diffusion models marks a promising advance for image and video codecs. Future studies might explore:

  • Further integration with text-to-image capabilities of diffusion models to enhance semantic fidelity.
  • Reduction of computational demand and inference time to adapt this methodology for broader, real-time applications.

Conclusion

This research presents a novel framework for extreme image compression that combines a compressive autoencoder with a diffusion-based decoder enhanced by latent feature guidance. By setting new benchmarks in visual and quantitative metrics at ultra-low bitrates, it paves the way for further developments in efficient, high-quality image compression.

Authors (5)
  1. Zhiyuan Li (304 papers)
  2. Yanhui Zhou (4 papers)
  3. Hao Wei (80 papers)
  4. Chenyang Ge (8 papers)
  5. Jingwen Jiang (4 papers)