
Post-Training Quantization for Cross-Platform Learned Image Compression (2202.07513v2)

Published 15 Feb 2022 in eess.IV and cs.CV

Abstract: Learned image compression has been shown to outperform conventional image coding techniques and is becoming practical for industrial applications. One of the most critical remaining issues is non-deterministic calculation, which makes probability prediction inconsistent across platforms and causes decoding failures. We propose to solve this problem by introducing well-developed post-training quantization and making model inference integer-arithmetic-only, which is much simpler than existing training- and fine-tuning-based approaches yet still preserves the superior rate-distortion performance of learned image compression. Building on this, we further improve the discretization of the entropy parameters and extend deterministic inference to Gaussian mixture models. With our proposed methods, current state-of-the-art image compression models can run inference in a cross-platform-consistent manner, making the further development and practical deployment of learned image compression more promising.
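To make the integer-arithmetic-only idea concrete, the sketch below quantizes the predicted Gaussian entropy parameters to fixed-point integers and evaluates an approximate CDF with integer operations only, so encoder and decoder derive bit-identical probability tables on any platform. This is a minimal illustration, not the paper's actual scheme: the fixed-point bit widths, the piecewise-linear CDF approximation, and all function names are assumptions made for the example.

```python
import numpy as np  # only used if you extend this to tensor-valued parameters

# Hypothetical fixed-point settings (not from the paper): 8 fractional bits
# for entropy parameters and a 16-bit probability table for the entropy coder.
FRAC_BITS = 8
SCALE = 1 << FRAC_BITS
PMF_PRECISION = 16

def quantize_params(mu, sigma):
    """Round floating-point Gaussian parameters to fixed-point integers."""
    q_mu = int(round(mu * SCALE))
    q_sigma = max(1, int(round(sigma * SCALE)))  # avoid a zero scale
    return q_mu, q_sigma

def integer_cdf(x, q_mu, q_sigma):
    """Integer-only, piecewise-linear stand-in for the Gaussian CDF.

    Every intermediate value is an integer, so the result is bit-exact on any
    platform, unlike a float erf/softmax whose rounding differs per backend.
    """
    t = (x * SCALE - q_mu) * SCALE // q_sigma        # fixed-point (x - mu) / sigma
    t = max(-4 * SCALE, min(4 * SCALE, t))           # clamp to [-4, 4] in fixed point
    return ((t + 4 * SCALE) << PMF_PRECISION) // (8 * SCALE)

# Encoder and decoder rebuild the same table from the same integers, so the
# arithmetic coder sees identical probabilities on every device.
q_mu, q_sigma = quantize_params(mu=0.37, sigma=1.25)
cdf_table = [integer_cdf(x, q_mu, q_sigma) for x in range(-8, 9)]
print(cdf_table)
```

In practice the CDF approximation would be chosen to track the true Gaussian (or Gaussian mixture) much more closely; the point here is only that, once the entropy parameters are discretized, the whole probability-table computation can stay in integer arithmetic and therefore decode consistently across platforms.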

Authors (6)
  1. Dailan He (25 papers)
  2. Ziming Yang (8 papers)
  3. Yuan Chen (113 papers)
  4. Qi Zhang (785 papers)
  5. Hongwei Qin (38 papers)
  6. Yan Wang (733 papers)
Citations (8)
