
Computing the optimal error exponential function for fixed-length lossy coding in discrete memoryless sources (2304.11558v1)

Published 23 Apr 2023 in cs.IT and math.IT

Abstract: The error exponent of fixed-length lossy source coding was established by Marton. Ahlswede showed that this exponent can be discontinuous at a rate $R$, depending on the probability distribution $P$ of the given information source and the distortion measure $d(x,y)$. The discontinuity arises because there exist pairs $(d,\Delta)$ for which the rate-distortion function $R(\Delta|P)$ is neither concave nor quasi-concave with respect to $P$. Arimoto's algorithm for computing the error exponent in lossy source coding is based on Blahut's parametric representation of the error exponent. However, Blahut's parametric representation is a lower convex envelope of Marton's exponent, and the two do not agree in general. The contribution of this paper is a parametric representation that matches the inverse function of Marton's exponent exactly, thus avoiding the problem of the rate-distortion function being non-convex with respect to $P$. The optimal distribution for fixed parameters can be obtained with Arimoto's algorithm, and a nonconvex optimization over the parameters then yields the inverse function of Marton's exponent.
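The inner step referenced in the abstract, finding the optimal distribution for fixed parameters via Arimoto's algorithm, builds on the classical Blahut–Arimoto iteration for the rate-distortion function $R(\Delta|P)$. As a point of reference, the sketch below implements that standard iteration for a discrete memoryless source; it is not the paper's new parametric representation, and the function name and convergence settings are illustrative choices, not from the paper.

```python
import numpy as np

def blahut_arimoto_rd(p, d, s, n_iter=500, tol=1e-10):
    """Classical Blahut-Arimoto iteration for the rate-distortion
    function of a discrete memoryless source (illustrative sketch).

    p : source distribution P(x), shape (nx,)
    d : distortion matrix d[x, y], shape (nx, ny)
    s : slope parameter (Lagrange multiplier, s > 0); each value of s
        traces out one point (D, R(D)) on the rate-distortion curve.
    Returns (D, R) with R in nats.
    """
    nx, ny = d.shape
    q = np.full(ny, 1.0 / ny)      # output marginal q(y), init uniform
    A = np.exp(-s * d)             # kernel e^{-s d(x,y)}
    for _ in range(n_iter):
        # conditional update: Q(y|x) proportional to q(y) e^{-s d(x,y)}
        Q = q * A
        Q /= Q.sum(axis=1, keepdims=True)
        q_new = p @ Q              # marginal update: q(y) = sum_x p(x) Q(y|x)
        if np.max(np.abs(q_new - q)) < tol:
            q = q_new
            break
        q = q_new
    Q = q * A
    Q /= Q.sum(axis=1, keepdims=True)
    D = np.sum(p[:, None] * Q * d)                       # expected distortion
    R = np.sum(p[:, None] * Q * np.log(Q / q))           # mutual information
    return D, R
```

For a uniform binary source with Hamming distortion, the iteration recovers the known closed form $R(D) = \log 2 - H_b(D)$, which is a convenient sanity check. The paper's point is precisely that sweeping such fixed-parameter solutions is not enough when $R(\Delta|P)$ fails to be (quasi-)concave in $P$: an additional nonconvex optimization over the parameters is needed to recover the inverse of Marton's exponent.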

Citations (1)




Authors (1)