Contextformer: A Transformer with Spatio-Channel Attention for Context Modeling in Learned Image Compression (2203.02452v2)

Published 4 Mar 2022 in eess.IV, cs.CV, and cs.LG

Abstract: Entropy modeling is a key component of high-performance image compression algorithms. Recent developments in autoregressive context modeling have helped learning-based methods surpass their classical counterparts. However, the performance of these models can be further improved: spatio-channel dependencies in the latent space remain underexploited, and context adaptivity is implemented suboptimally. Inspired by the adaptive characteristics of transformers, we propose a transformer-based context model, named Contextformer, which generalizes the de facto standard attention mechanism to spatio-channel attention. We replace the context model of a modern compression framework with the Contextformer and test it on the widely used Kodak, CLIC2020, and Tecnick image datasets. Our experimental results show that the proposed model provides up to 11% rate savings compared to the standard Versatile Video Coding (VVC) Test Model (VTM) 16.2, and outperforms various learning-based models in terms of PSNR and MS-SSIM.
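
The core idea, an autoregressive context model whose attention operates jointly over spatial positions and channel segments, can be sketched as follows. This is a minimal illustration in PyTorch under stated assumptions: the class name SpatioChannelContext, the segment count, the token ordering, and all dimensions are hypothetical choices for exposition, not the authors' actual Contextformer implementation.

```python
# Minimal sketch of spatio-channel attention for an autoregressive
# context model (assumed setup; not the paper's exact architecture).
import torch
import torch.nn as nn

class SpatioChannelContext(nn.Module):
    def __init__(self, latent_ch=192, num_segments=4, d_model=256, n_heads=8):
        super().__init__()
        assert latent_ch % num_segments == 0
        self.num_segments = num_segments
        self.seg_dim = latent_ch // num_segments
        self.embed = nn.Linear(self.seg_dim, d_model)
        # Learned start token so the first prediction has a valid context.
        self.start = nn.Parameter(torch.zeros(1, 1, d_model))
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Predict mean and scale of a Gaussian entropy model per segment.
        self.head = nn.Linear(d_model, 2 * self.seg_dim)

    def forward(self, y):
        # y: quantized latents, shape (B, C, H, W).
        b, c, h, w = y.shape
        s, d = self.num_segments, self.seg_dim
        # Flatten into a spatio-channel token sequence of length H*W*S:
        # each spatial position contributes S channel-segment tokens.
        tokens = y.view(b, s, d, h * w).permute(0, 3, 1, 2).reshape(b, h * w * s, d)
        x = self.embed(tokens)
        # Shift right: the prediction for token i must condition only on
        # tokens decoded before it, never on token i itself.
        x = torch.cat([self.start.expand(b, 1, -1), x[:, :-1]], dim=1)
        # Causal mask enforces the autoregressive scan order jointly over
        # spatial positions and channel segments.
        n = x.size(1)
        mask = torch.triu(torch.ones(n, n, dtype=torch.bool, device=y.device), 1)
        ctx, _ = self.attn(x, x, x, attn_mask=mask)
        mean, scale = self.head(ctx).chunk(2, dim=-1)
        return mean, scale
```

The boolean upper-triangular mask plus the right-shifted input enforce the decode order, so each spatio-channel token's entropy parameters depend only on previously decoded tokens; in practice, the quadratic cost of full attention over H*W*S tokens would motivate restricting the attention window.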

Authors (6)
  1. A. Burakhan Koyuncu (5 papers)
  2. Han Gao (78 papers)
  3. Atanas Boev (4 papers)
  4. Georgii Gaikov (2 papers)
  5. Elena Alshina (9 papers)
  6. Eckehard Steinbach (29 papers)
Citations (64)
