Macroscopic-scale hardness for generalized (finite-union-of-interval) perceptrons

Show that in the perceptron model where each constraint requires the normalized inner product to lie in a fixed finite union of intervals, there exists a sequence $k_N = \tilde\Omega(N)$ such that every $\ell_2$-stable algorithm under small Gaussian resampling fails with probability at least $\Omega(1)$ to $k_N$-locate a $k_N$-isolated solution (using the same notions of stability and locating as in Theorem 1.10).

Background

Theorem 1.10 provides a constant-success upper bound for stable algorithms in generalized perceptrons defined by a finite union of intervals, but only at submacroscopic scales $k_N \le \sqrt{N}/(\log N)^2$. The paper explains that the small-radius restriction arises because, with two-sided constraints, the local monotonicity needed for Pitt's inequality holds only on sufficiently small Hamming balls.
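For concreteness, the constraint structure can be written as follows. This is a sketch of one standard formalization; the symbols $M$, $m$, $g_a$, $\alpha_j$, $\beta_j$, and $S$ are notation introduced here for illustration, not taken from the excerpt.

```latex
% Generalized perceptron: S is a fixed finite union of intervals.
% A solution x \in \{-1,+1\}^N must satisfy every constraint a = 1, \dots, M:
\[
  \frac{\langle g_a, x \rangle}{\sqrt{N}} \in S,
  \qquad
  S = \bigcup_{j=1}^{m} [\alpha_j, \beta_j],
  \qquad
  g_a \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, I_N).
\]
% The symmetric binary perceptron is the special case S = [-\kappa, \kappa].
```

Each interval contributes a two-sided condition on $\langle g_a, x\rangle/\sqrt{N}$, which is where the loss of global monotonicity (and hence the small-radius restriction in Theorem 1.10) originates.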

Extending the obstruction to $k_N = \tilde\Omega(N)$ would align the generalized perceptron setting with the macroscopic isolation regime and, when combined with the low-degree stability corollary, would imply stronger low-degree hardness at nearly linear degrees.

References

Open problem. Show that in the generalized perceptron model considered in Theorem~\ref{thm:stable-SBP}, stable algorithms cannot $k_N$-locate a $k_N$-isolated solution with probability more than $1-\Omega(1)$, for some $k_N = \tilde\Omega(N)$.

Stable algorithms cannot reliably find isolated perceptron solutions (2604.00328 - Gong et al., 31 Mar 2026), Section 6 (Discussion)