Algorithms for Sparse LPN and LSPN Against Low-noise (2407.19215v8)

Published 27 Jul 2024 in cs.CR

Abstract: We consider sparse variants of the classical Learning Parities with Noise (LPN) problem. Our main contribution is a new algorithmic framework that provides learning algorithms in the low-noise regime for both the Learning Sparse Parities with Noise (LSPN) problem and the sparse LPN problem. Unlike previous approaches to LSPN and sparse LPN, this framework has a simple structure and runs in polynomial space. Let $n$ be the dimension, $k$ the sparsity, and $\eta$ the noise rate. As a fundamental problem in computational learning theory, LSPN assumes the hidden parity is $k$-sparse. While a simple enumeration algorithm takes ${n \choose k} = O(n/k)^k$ time, previously known results still need ${n \choose k/2} = \Omega(n/k)^{k/2}$ time for any noise rate $\eta$. Our framework provides an LSPN algorithm that runs in $O(\eta \cdot n/k)^k$ time for any noise rate $\eta$, which improves the state of the art for LSPN whenever $\eta \in (k/n, \sqrt{k/n})$. The sparse LPN problem is closely related to the classical problem of refuting random $k$-CSPs and has been widely used in cryptography as a hardness assumption. Unlike standard LPN, it samples random $k$-sparse vectors. Because the number of $k$-sparse vectors is ${n \choose k} < n^k$, sparse LPN admits polynomial-time learning algorithms when $m > n^{k/2}$. However, much less is known about learning algorithms for constant $k$ (such as $k=3$) with $m < n^{k/2}$ samples, apart from the Gaussian elimination algorithm running in time $e^{\eta n}$. Our framework provides a learning algorithm running in $e^{O(\eta \cdot n^{\frac{\delta+1}{2}})}$ time given $\delta \in (0,1)$ and $m \approx n^{1+(1-\delta)\cdot \frac{k-1}{2}}$ samples. This improves upon previous learning algorithms. For example, in the classical setting of $k=3$ and $m=n^{1.4}$, our algorithm is faster than previous approaches for any $\eta < n^{-0.7}$.
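
As a quick instantiation of these bounds (derived here from the abstract's own formulas, not quoted from the paper): for LSPN, at the upper end of the improvement range, $\eta = \sqrt{k/n}$, the new bound $O(\eta \cdot n/k)^k = O(\sqrt{n/k})^k = O(n/k)^{k/2}$ matches the previous ${n \choose k/2}$-type runtime, and it shrinks toward $2^{O(k)}$ as $\eta$ approaches $k/n$. For sparse LPN with $k=3$ and $m = n^{1.4}$, solving $m \approx n^{1+(1-\delta)\cdot\frac{k-1}{2}} = n^{2-\delta}$ gives $\delta = 0.6$, so the stated running time becomes $e^{O(\eta \cdot n^{0.8})}$, versus $e^{\eta n}$ for Gaussian elimination.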
