Macroscopic-scale hardness for generalized (finite-union-of-interval) perceptrons
Show that in the perceptron model where each constraint requires the normalized inner product to lie in a fixed finite union of intervals, there exists a sequence $k_N = \tilde\Omega(N)$ such that every $\ell_2$-stable algorithm under small Gaussian resampling fails with probability at least $\Omega(1)$ to $k_N$-locate a $k_N$-isolated solution (using the same notions of stability and locating as in Theorem 1.10).
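For concreteness, the solution set in the generalized model can be sketched as follows. The notation here ($g_a$ for the Gaussian constraint vectors, $S$ for the fixed finite union of intervals, $M$ for the number of constraints) is an assumed standard convention, not quoted from the source paper:

```latex
% Sketch (assumed notation): x ranges over the hypercube \{-1,+1\}^N,
% g_1, \dots, g_M are i.i.d. standard Gaussian vectors in \mathbb{R}^N,
% and S \subset \mathbb{R} is a fixed finite union of intervals.
\mathcal{S}(g_1, \dots, g_M) \;=\;
  \Bigl\{\, x \in \{-1,+1\}^N \;:\;
    \frac{\langle g_a, x \rangle}{\sqrt{N}} \in S
    \ \text{for all } a \in [M] \,\Bigr\}.
```

The classical symmetric binary perceptron is recovered when $S = [-\kappa, \kappa]$ for a single threshold $\kappa > 0$; the open problem asks whether the stability-based hardness extends to arbitrary finite unions of intervals at the macroscopic scale $k_N = \tilde\Omega(N)$.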
References
Open problem. Show that in the generalized perceptron model considered in Theorem~\ref{thm:stable-SBP}, stable algorithms cannot $k_N$-locate a $k_N$-isolated solution with probability more than $1-\Omega(1)$, for some $k_N = \tilde\Omega(N)$.
"Stable algorithms cannot reliably find isolated perceptron solutions" (2604.00328, Gong et al., 31 Mar 2026), Section 6 (Discussion).