
Parametric context adaptive Laplace distribution for multimedia compression (1906.03238v4)

Published 28 May 2019 in eess.IV and cs.MM

Abstract: Data compression often subtracts a prediction and encodes the difference (residue), e.g. assuming a Laplace distribution, for images, videos, audio, or numerical data. Performance depends strongly on the proper choice of the width (scale parameter) of this parametric distribution, and can be improved by optimizing it based on the local situation, such as the context. For example, the popular LOCO-I \cite{loco} (JPEG-LS) lossless image compressor uses a 3-dimensional context quantized into 365 discrete possibilities treated independently. This article discusses inexpensive approaches to exploiting their dependencies with autoregressive, ARCH-like context-dependent models for the parameters of the residue distribution, which can also evolve in time in the adaptive case. For example, tested 4- and 11-parameter models of this kind turned out to provide performance similar to the 365-parameter LOCO-I model on 48 tested images. Besides smaller headers, such a reduction in the number of parameters can lead to better generalization. In contrast to context-quantization approaches, parameterized models also allow direct use of higher-dimensional contexts, for example using information from all 3 color channels, further pixels, additional region classifiers, or interleaved multi-scale scanning, for which a Haar upscale scan is proposed, combining the advantages of Haar wavelets with the possibility of scanning that exploits local contexts.
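To make the idea concrete, below is a minimal sketch of the general approach the abstract describes: predict pixels, model the Laplace scale of the residue as a linear (ARCH-like) function of a few context features, and measure the resulting ideal code length. The 4-feature choice (constant plus absolute residues of left, up, and up-left neighbors) and the fitting method (least squares on |r|, justified by E|r| = b for a Laplace(0, b) variable) are illustrative assumptions, not the paper's exact procedure; only the MED predictor is taken directly from LOCO-I/JPEG-LS.

```python
import numpy as np

def med_predictor(img):
    """LOCO-I/JPEG-LS median (MED) predictor; returns predictions and residues."""
    img = img.astype(np.float64)
    pred = np.zeros_like(img)
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            a = img[y, x - 1] if x > 0 else 0.0                 # left neighbor
            b = img[y - 1, x] if y > 0 else 0.0                 # up neighbor
            c = img[y - 1, x - 1] if x > 0 and y > 0 else 0.0   # up-left neighbor
            if c >= max(a, b):
                pred[y, x] = min(a, b)
            elif c <= min(a, b):
                pred[y, x] = max(a, b)
            else:
                pred[y, x] = a + b - c
    return pred, img - pred

def fit_scale_model(res):
    """Illustrative 4-parameter ARCH-like model for the Laplace scale:
    b = beta0 + beta1*|r_left| + beta2*|r_up| + beta3*|r_upleft|,
    fit by least squares on |r| (since E|r| = b for Laplace(0, b))."""
    a = np.abs(res)
    y = a[1:, 1:].ravel()                        # current |residue|
    X = np.stack([np.ones_like(y),
                  a[1:, :-1].ravel(),            # left neighbor
                  a[:-1, 1:].ravel(),            # up neighbor
                  a[:-1, :-1].ravel()], axis=1)  # up-left neighbor
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, X, y

def mean_bits(beta, X, y):
    """Average ideal code length (bits/sample) under Laplace(0, b(context))."""
    b = np.maximum(X @ beta, 1e-3)   # predicted scale, floored for numerical stability
    nll = np.log(2 * b) + y / b      # Laplace negative log-likelihood (nats)
    return np.mean(nll) / np.log(2)

# Usage with a hypothetical grayscale image array `img`:
#   pred, res = med_predictor(img)
#   beta, X, y = fit_scale_model(res)
#   print(mean_bits(beta, X, y), "bits/pixel")
```

Comparing the bits/pixel from such a few-parameter contextual model against a single global Laplace scale gives a quick sense of the gain the abstract attributes to context-dependent width modeling.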

Citations (7)
