Entroformer: A Transformer-based Entropy Model for Learned Image Compression (2202.05492v2)

Published 11 Feb 2022 in eess.IV and cs.CV

Abstract: One critical component in lossy deep image compression is the entropy model, which predicts the probability distribution of the quantized latent representation in the encoding and decoding modules. Previous works build entropy models upon convolutional neural networks, which are inefficient at capturing global dependencies. In this work, we propose a novel transformer-based entropy model, termed Entroformer, to capture long-range dependencies in probability distribution estimation effectively and efficiently. Unlike vision transformers designed for image classification, the Entroformer is highly optimized for image compression, incorporating a top-k self-attention mechanism and a diamond relative position encoding. We further extend this architecture with a parallel bidirectional context model to speed up decoding. Experiments show that the Entroformer achieves state-of-the-art performance on image compression while remaining time-efficient.
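The abstract's top-k self-attention refers to sparsifying each query's attention distribution so that only the k most relevant keys contribute. The sketch below is a minimal PyTorch illustration of that idea, not the paper's actual implementation; the function name, tensor shapes, and the default `topk` value are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def topk_self_attention(q, k, v, topk=32):
    """Minimal sketch of top-k self-attention.

    For each query, keep only the `topk` largest attention scores and
    mask the rest to -inf before the softmax, so attention weights are
    concentrated on the most relevant positions.
    q, k, v: tensors of shape (batch, seq_len, dim).
    """
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5            # (batch, seq_len, seq_len)
    kth_score = scores.topk(topk, dim=-1).values[..., -1:]  # k-th largest score per query
    masked = scores.masked_fill(scores < kth_score, float("-inf"))
    attn = F.softmax(masked, dim=-1)
    return attn @ v                                          # (batch, seq_len, dim)

# Example usage with random latents (shapes chosen arbitrarily):
q = k = v = torch.randn(1, 256, 64)
out = topk_self_attention(q, k, v, topk=32)
```

In an entropy model, such an attention block would operate over the quantized latent tokens to predict distribution parameters (e.g., means and scales) for arithmetic coding; the details of how the Entroformer wires this into its context model are described in the paper itself.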

Authors (5)
  1. Yichen Qian (10 papers)
  2. Ming Lin (65 papers)
  3. Xiuyu Sun (25 papers)
  4. Zhiyu Tan (26 papers)
  5. Rong Jin (164 papers)
Citations (116)
