Neural Image Compression Using Masked Sparse Visual Representation (2309.11661v1)
Abstract: We study neural image compression based on the Sparse Visual Representation (SVR), where images are embedded into a discrete latent space spanned by learned visual codebooks. By sharing codebooks with the decoder, the encoder transfers only integer codeword indices, which are efficient and robust across platforms, and the decoder retrieves the embedded latent feature using these indices for reconstruction. Previous SVR-based compression lacks an effective mechanism for rate-distortion tradeoff: one can pursue either high reconstruction quality or low transmission bitrate, but not a balance between them. We propose a Masked Adaptive Codebook learning (M-AdaCode) method that applies masks to the latent feature subspace to balance bitrate and reconstruction quality. A set of semantic-class-dependent basis codebooks is learned, and these are combined with weights to generate a rich latent feature for high-quality reconstruction. The combining weights are adaptively derived from each input image, providing fidelity information at an additional transmission cost. By masking out unimportant weights in the encoder and recovering them in the decoder, we trade reconstruction quality for transmission bits, and the masking rate controls the balance between bitrate and distortion. Experiments on the standard JPEG-AI dataset demonstrate the effectiveness of our M-AdaCode approach.
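The core rate-distortion knob described above is the masking of the per-image combining weights: fewer transmitted weights mean fewer bits but a poorer weighted combination of the basis codebooks. A minimal NumPy sketch of this idea follows; the function name `mask_weights`, the tensor shapes, and the magnitude-based selection rule are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def mask_weights(weights, mask_rate):
    """Zero out the smallest-magnitude combining weights.

    Hypothetical sketch: keeps the (1 - mask_rate) fraction of weights
    with the largest magnitude; the rest would be recovered in the decoder.
    """
    k = int(round(mask_rate * weights.size))
    if k == 0:
        return weights.copy()
    masked = weights.copy()
    # Indices of the k smallest-magnitude ("unimportant") weights.
    drop = np.argsort(np.abs(weights))[:k]
    masked[drop] = 0.0
    return masked

# Toy example: 8 semantic-class-dependent basis codebooks and one image's
# adaptively derived combining weights (shapes are made up for illustration).
rng = np.random.default_rng(0)
codebooks = rng.normal(size=(8, 16, 4))  # (num_codebooks, codebook_size, dim)
weights = rng.normal(size=8)

w_masked = mask_weights(weights, mask_rate=0.5)  # transmit only half the weights
# Weighted combination of the basis codebooks into one rich codebook.
combined = np.tensordot(w_masked, codebooks, axes=([0], [0]))  # shape (16, 4)
print(np.count_nonzero(w_masked))  # → 4
```

Raising `mask_rate` toward 1 would shrink the weight payload (lower bitrate) at the cost of a less faithful combined codebook, which is the tradeoff the abstract describes.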