List Decoding Expander-Based Codes up to Capacity in Near-Linear Time (2504.20333v1)

Published 29 Apr 2025 in cs.DS, cs.CC, cs.IT, and math.IT

Abstract: We give a new framework, based on graph regularity lemmas, for list decoding and list recovery of codes based on spectral expanders. Using existing algorithms for computing regularity decompositions of sparse graphs in (randomized) near-linear time, and appropriate choices for the constant-sized inner/base codes, we prove the following:

- Expander-based codes constructed using the distance amplification technique of Alon, Edmonds and Luby [FOCS 1995] with rate $\rho$ can be list decoded to radius $1 - \rho - \epsilon$ in near-linear time. By known results, the output list has size $O(1/\epsilon)$.
- The above codes of Alon, Edmonds and Luby, with rate $\rho$, can also be list recovered to radius $1 - \rho - \epsilon$ in near-linear time, with constant-sized output lists.
- The Tanner code construction of Sipser and Spielman [IEEE Trans. Inf. Theory 1996] with distance $\delta$ can be list decoded to radius $\delta - \epsilon$ in near-linear time, with constant-sized output lists.

Our results imply novel combinatorial as well as algorithmic bounds for each of the above explicit constructions. All of these bounds are obtained via combinatorial rigidity phenomena, proved using (weak) graph regularity. The regularity framework allows us to lift the list decoding and list recovery properties of the local base codes to the global codes obtained via the above constructions.
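To make the third bullet's object concrete, here is a minimal sketch of the Tanner code construction of Sipser and Spielman: symbols live on the edges of a $d$-regular graph $G$, and an edge labeling is a codeword of $T(G, C_0)$ exactly when every vertex's local view (the labels on its $d$ incident edges) is a codeword of a base code $C_0$ of block length $d$. The graph, base code, and all names below (`local_view`, `is_tanner_codeword`, etc.) are illustrative assumptions, not from the paper; the paper works with spectral expanders and list-decodable base codes, while this demo uses a tiny 3-regular graph and the repetition code purely to make the definition checkable.

```python
from itertools import product

# Illustrative assumption: a 3-regular graph on 4 vertices (K4).
# Codeword symbols live on the edges.
EDGES = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
NUM_VERTICES = 4

# Illustrative assumption: base code C0 of block length d = 3,
# here the binary repetition code {000, 111}.
BASE_CODE = {(0, 0, 0), (1, 1, 1)}

def local_view(labeling, vertex):
    """Ordered tuple of edge labels on the edges incident to `vertex`."""
    return tuple(labeling[i] for i, e in enumerate(EDGES) if vertex in e)

def is_tanner_codeword(labeling):
    """An edge labeling is in T(G, C0) iff every vertex's local view
    is a codeword of the base code C0."""
    return all(local_view(labeling, v) in BASE_CODE
               for v in range(NUM_VERTICES))

# Enumerate T(G, C0) by brute force (fine at this toy size).
code = [w for w in product((0, 1), repeat=len(EDGES))
        if is_tanner_codeword(w)]
print(code)  # For connected K4 with the repetition base code,
             # only the all-zeros and all-ones labelings survive.
```

The sketch only shows the code's definition; the paper's contribution is the decoding side, namely that when $G$ is a good spectral expander and $C_0$ is list decodable, the regularity framework lifts list decodability from $C_0$ to $T(G, C_0)$ with a near-linear-time algorithm.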
