Near-Optimal List-Recovery of Linear Code Families (2502.13877v2)

Published 19 Feb 2025 in cs.IT, math.CO, and math.IT

Abstract: We prove several results on linear codes achieving list-recovery capacity. We show that random linear codes achieve list-recovery capacity with constant output list size (independent of the alphabet size and length). That is, over alphabets of size at least $\ell^{\Omega(1/\varepsilon)}$, random linear codes of rate $R$ are $(1-R-\varepsilon, \ell, (\ell/\varepsilon)^{O(\ell/\varepsilon)})$-list-recoverable for all $R\in(0,1)$ and $\ell$. Together with a result of Levi, Mosheiff, and Shagrithaya, this implies that randomly punctured Reed-Solomon codes also achieve list-recovery capacity. We also prove that our output list size is near-optimal among all linear codes: all $(1-R-\varepsilon, \ell, L)$-list-recoverable linear codes must have $L\ge \ell^{\Omega(R/\varepsilon)}$. Our simple upper bound combines the Zyablov-Pinsker argument with recent bounds from Kopparty, Ron-Zewi, Saraf, Wootters, and Tamo on the maximum intersection of a "list-recovery ball" and a low-dimensional subspace with large distance. Our lower bound is inspired by a recent lower bound of Chen and Zhang.
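
For readers unfamiliar with the parameterization $(\rho, \ell, L)$ used above, the abstract assumes (but does not spell out) the standard definition of list-recovery; the following sketch of that definition is added here for context. A code $C \subseteq \Sigma^n$ is $(\rho, \ell, L)$-list-recoverable if, for every collection of input lists $S_1, \dots, S_n \subseteq \Sigma$ with $|S_i| \le \ell$ for all $i$,

$$\bigl|\{\, c \in C \;:\; |\{\, i : c_i \notin S_i \,\}| \le \rho n \,\}\bigr| \;\le\; L.$$

Under this definition, "achieving list-recovery capacity" means tolerating an error fraction $\rho = 1 - R - \varepsilon$ at rate $R$, which is optimal up to the $\varepsilon$ slack over sufficiently large alphabets; the paper's contribution is that random linear codes achieve this with output list size $L$ depending only on $\ell$ and $\varepsilon$, and that an exponential dependence of $L$ on $1/\varepsilon$ is unavoidable for linear codes.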
