
Context Tree based Image Contour Coding using A Geometric Prior (1604.08001v1)

Published 27 Apr 2016 in cs.MM

Abstract: If object contours in images are coded efficiently as side information, then they can facilitate advanced image/video coding techniques, such as graph Fourier transform coding or motion prediction of arbitrarily shaped pixel blocks. In this paper, we study the problem of lossless and lossy compression of detected contours in images. Specifically, we first convert a detected object contour composed of contiguous between-pixel edges to a sequence of directional symbols drawn from a small alphabet. To encode the symbol sequence using arithmetic coding, we compute an optimal variable-length context tree (VCT) $\mathcal{T}$ via a maximum a posteriori (MAP) formulation to estimate symbols' conditional probabilities. MAP prevents us from overfitting given a small training set $\mathcal{X}$ of past symbol sequences by identifying a VCT $\mathcal{T}$ that achieves both a high likelihood $P(\mathcal{X}|\mathcal{T})$ of observing $\mathcal{X}$ given $\mathcal{T}$ and a large geometric prior $P(\mathcal{T})$ stating that image contours are more often straight than curvy. For the lossy case, we design efficient dynamic programming (DP) algorithms that optimally trade off the coding rate of an approximate contour $\hat{\mathbf{x}}$ given a VCT $\mathcal{T}$ with two notions of distortion of $\hat{\mathbf{x}}$ with respect to the original contour $\mathbf{x}$. To reduce the size of the DP tables, a total suffix tree is derived from a given VCT $\mathcal{T}$ for compact table entry indexing, reducing complexity. Experimental results show that for lossless contour coding, our proposed algorithm outperforms state-of-the-art context-based schemes consistently for both small and large training datasets. For lossy contour coding, our algorithms outperform comparable schemes in the literature in rate-distortion performance.
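As a concrete, heavily simplified illustration of the lossless pipeline, the Python sketch below converts a contour's absolute between-pixel edge directions into relative turn symbols from a three-letter alphabet, then estimates conditional symbol probabilities from fixed-length contexts and computes the ideal arithmetic-coding rate. Fixed-length contexts are only a stand-in for the paper's variable-length context tree; the MAP tree selection and the geometric prior are omitted, and the function names and symbol encoding are hypothetical.

```python
import math
from collections import defaultdict

# Absolute directions of between-pixel edges: 0=E, 1=N, 2=W, 3=S.
# Relative turn alphabet (a hypothetical encoding of the abstract's
# "small alphabet"): 's' = straight, 'l' = left turn, 'r' = right turn.
REL = {0: "s", 1: "l", 3: "r"}  # (curr - prev) mod 4 -> symbol

def to_relative(abs_dirs):
    """Differential chain code: map consecutive absolute directions to
    relative turn symbols. A 180-degree reversal (diff == 2) cannot occur
    on a simple contour, so it is rejected here."""
    out = []
    for prev, curr in zip(abs_dirs, abs_dirs[1:]):
        diff = (curr - prev) % 4
        if diff == 2:
            raise ValueError("edge reversal is not a valid contour move")
        out.append(REL[diff])
    return out

def context_counts(sequences, k):
    """Count how often each symbol follows each length-k context."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for i in range(k, len(seq)):
            counts[tuple(seq[i - k:i])][seq[i]] += 1
    return counts

def code_length_bits(seq, counts, k, alphabet=("s", "l", "r")):
    """Ideal arithmetic-coding rate: sum of -log2 of the Laplace-smoothed
    conditional probability of each symbol given its context."""
    bits = 0.0
    for i in range(k, len(seq)):
        c = counts[tuple(seq[i - k:i])]
        total = sum(c.values()) + len(alphabet)  # add-one smoothing
        bits += -math.log2((c[seq[i]] + 1) / total)
    return bits

# A mostly straight contour yields a low rate under this model,
# consistent with the prior that contours are more often straight.
train = [to_relative([0, 0, 0, 1, 1, 0, 0, 0, 3, 0, 0])]
model = context_counts(train, k=2)
print(round(code_length_bits(train[0], model, k=2), 2), "bits")
```

A real VCT would grow each context only where the training data (and the geometric prior favoring straight continuations) justify the extra depth; here the context length k is simply fixed.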
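The lossy coder can be sketched in the same spirit. The toy Lagrangian dynamic program below searches over approximations of the symbol sequence, charging each chosen symbol its conditional code length plus lambda times a unit distortion per substitution; its state is (position, last k chosen symbols), loosely mirroring how the paper's DP tables are indexed by context (there compacted via a total suffix tree). The distortion measure, the rate model, and all names are assumptions for illustration, not the paper's definitions.

```python
import math

ALPHABET = ("s", "l", "r")

def rd_dp(seq, cond_bits, k, lam):
    """Minimize rate + lam * distortion over approximations of seq.
    cond_bits(ctx, sym) is the code length in bits of sym given the
    length-k context ctx; distortion is one unit per substituted symbol
    (a toy metric, not the paper's geometric distortions)."""
    start = tuple(seq[:k])            # keep the first k symbols verbatim
    best = {start: 0.0}               # DP costs, keyed by last-k context
    back = {}                         # backpointers for path recovery
    for i in range(k, len(seq)):
        nxt = {}
        for ctx, cost in best.items():
            for sym in ALPHABET:
                c = cost + cond_bits(ctx, sym) + lam * (sym != seq[i])
                state = ctx[1:] + (sym,)
                if c < nxt.get(state, math.inf):
                    nxt[state] = c
                    back[(i, state)] = (ctx, sym)
        best = nxt
    # Backtrack from the cheapest final state.
    state = min(best, key=best.get)
    total = best[state]
    approx = []
    for i in range(len(seq) - 1, k - 1, -1):
        state, sym = back[(i, state)]
        approx.append(sym)
    return list(seq[:k]) + approx[::-1], total

# Hypothetical context-independent rate model: straight continuations are
# cheap, turns are costly (a real model would use ctx, e.g. VCT counts).
def cond_bits(ctx, sym):
    return 0.5 if sym == "s" else 2.0

orig = list("sslsrsslss")
approx, cost = rd_dp(orig, cond_bits, k=2, lam=1.0)
print("".join(orig), "->", "".join(approx), f"cost={cost:.1f}")
```

With these toy numbers, substituting a turn costs 1.5 (0.5 bits + 1.0 distortion) versus 2.0 bits to keep it, so at lam = 1.0 the DP straightens every turn and returns the all-'s' approximation at total cost 7.0; raising lam makes the coder preserve the original shape instead.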

Citations (12)
