Published 31 Mar 2026 in cs.CC, cs.DS, math-ph, and math.PR | (2604.00328v1)
Abstract: We study the binary perceptron, a random constraint satisfaction problem that asks to find a Boolean vector in the intersection of independently chosen random halfspaces. A striking feature of this model is that at every positive constraint density, it is expected that a $1-o_N(1)$ fraction of solutions are \emph{strongly isolated}, i.e. separated from all others by Hamming distance $\Omega(N)$. At the same time, efficient algorithms are known to find solutions at certain positive constraint densities. This raises a natural question: can any isolated solution be algorithmically visible? We answer this in the negative: no algorithm whose output is stable under a tiny Gaussian resampling of the disorder can \emph{reliably} locate isolated solutions. We show that any stable algorithm has success probability at most $\frac{3\sqrt{17}-9}{4}+o_N(1)\leq 0.84233$. Furthermore, every stable algorithm that finds a solution with probability $1-o_N(1)$ finds an isolated solution with probability $o_N(1)$. The class of stable algorithms we consider includes degree-$D$ polynomials up to $D\leq o(N/\log N)$; under the low-degree heuristic \cite{hopkins2018statistical}, this suggests that locating strongly isolated solutions requires running time $\exp(\widetilde{\Theta}(N))$. Our proof does not use the overlap gap property. Instead, we show via Pitt's correlation inequality that after a random perturbation of the disorder, the number of solutions located close to a pre-existing isolated solution cannot concentrate at $1$.
The paper demonstrates that stable algorithms, including low-degree ones, are bounded by a success probability of approximately 0.84233 in finding isolated perceptron solutions.
It employs a combinatorial and probabilistic framework, using Gaussian perturbations and Pitt’s inequality, to establish rigorous algorithmic lower bounds.
The findings reveal that although most solutions are strongly isolated, efficient algorithms only access rare, atypical clusters rather than the typical isolated solutions.
Stable Algorithms and Isolated Solutions in the Binary Perceptron
Introduction and Model Framework
The paper "Stable algorithms cannot reliably find isolated perceptron solutions" (2604.00328) addresses algorithmic complexity in the binary perceptron problem, focusing on the challenge of algorithmically identifying isolated solutions. The binary perceptron, a classical random CSP, asks for an assignment $\sigma \in \{-1,1\}^N$ that simultaneously satisfies $M$ random half-space constraints, typically modeled with i.i.d. Gaussian disorder. Explicitly, the solution set for the asymmetric perceptron is
$$S(H,\kappa) = \bigcap_{a=1}^{M} \left\{ \sigma \in \{-1,1\}^N : \langle H_a, \sigma \rangle \ge \kappa \sqrt{N} \right\},$$
where $H_a \sim \mathcal{N}(0, I_N)$. The parameter $\alpha = M/N$ encodes the constraint density.
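As a concrete toy-scale illustration of this definition, the sketch below samples a small Gaussian instance and brute-forces the solution set $S(H,\kappa)$. All parameter values are illustrative choices, not the paper's, and $N$ here is far too small for any asymptotic statement to apply.

```python
import itertools

import numpy as np

rng = np.random.default_rng(0)

# Toy asymmetric binary perceptron instance (parameters chosen only to
# make brute-force enumeration feasible).
N, alpha, kappa = 12, 0.5, -0.5
M = int(alpha * N)                # constraint density alpha = M / N
H = rng.standard_normal((M, N))   # disorder rows H_a ~ N(0, I_N)

def is_solution(H, sigma, kappa):
    """Check sigma in {-1,1}^N against every constraint <H_a, sigma> >= kappa*sqrt(N)."""
    return bool(np.all(H @ sigma >= kappa * np.sqrt(H.shape[1])))

# Enumerate S(H, kappa) by brute force over all 2^N sign vectors.
solutions = [np.array(s) for s in itertools.product([-1, 1], repeat=N)
             if is_solution(H, np.array(s), kappa)]
print(f"{len(solutions)} of {2**N} sign vectors are solutions")
```

Because the constraints are asymmetric half-spaces, $\sigma$ being a solution does not imply $-\sigma$ is one; the symmetric variant discussed later restores this symmetry.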
It is a well-established structural property that, for all $\alpha>0$, the overwhelming majority (a $1-o_N(1)$ fraction) of solutions are strongly isolated, i.e., separated from any other solution by Hamming distance $\Omega(N)$. Nevertheless, efficient algorithms (e.g., multi-scale majority, discrepancy minimization) can find solutions at certain positive densities. This interplay raises a central question: do stable polynomial-time algorithms ever locate truly isolated solutions, or do they always find solutions in rare, non-isolated clusters?
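The notion of isolation can be probed by brute force at toy scale. The sketch below (illustrative parameters, not the paper's) computes each solution's Hamming distance to its nearest distinct solution; at realistic $N$, this distance profile is what the $\Omega(N)$-isolation statement concerns.

```python
import itertools

import numpy as np

rng = np.random.default_rng(1)

# Tiny instance: classify each solution by its Hamming distance to the
# nearest other solution.  "Strongly isolated" in the paper means distance
# Omega(N); at this scale we can only inspect the distance profile.
N, M, kappa = 10, 5, -0.3
H = rng.standard_normal((M, N))

sols = np.array([s for s in itertools.product([-1, 1], repeat=N)
                 if np.all(H @ np.array(s) >= kappa * np.sqrt(N))])

def nearest_distance(i):
    """Hamming distance from sols[i] to its nearest distinct solution."""
    dists = (sols != sols[i]).sum(axis=1)   # Hamming distances to all solutions
    dists[i] = N + 1                        # exclude the solution itself
    return int(dists.min())

min_dists = [nearest_distance(i) for i in range(len(sols))]
print(f"{len(sols)} solutions; nearest-neighbor distances range "
      f"{min(min_dists)}..{max(min_dists)}")
```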
Main Results: Hardness of Finding Isolated Solutions
The core contributions of the paper are rigorous algorithmic lower bounds for the task of locating isolated solutions. The analysis is sharply focused on stable algorithms, that is, algorithms whose output is insensitive (in $\ell_2$ norm) to perturbation of the disorder by small, independent Gaussian noise. Importantly, this class contains all low-degree polynomial algorithms up to degree $o(N/\log N)$, capturing the currently best-understood tractable methods under the low-degree framework [hopkins2018statistical, GJW20, wein2025computational].
The paper proves:
For any stable (including low-degree) algorithm, the probability of successfully locating an isolated solution is bounded above by $\frac{3\sqrt{17}-9}{4}+o_N(1) \le 0.84233$, regardless of algorithmic ingenuity or model parameters.
For any stable algorithm that finds a solution with probability tending to $1$, the probability that this solution is isolated vanishes as $N \to \infty$.
Formally, for such a stable algorithm $\mathcal{A}$,
$$\mathbb{P}\left[\mathcal{A}(H) \in S(H,\kappa) \text{ and } \mathcal{A}(H) \text{ is strongly isolated}\right] \le \frac{3\sqrt{17}-9}{4} + o_N(1) \le 0.84233.$$
Furthermore, for successful algorithms: if $\mathbb{P}[\mathcal{A}(H) \in S(H,\kappa)] = 1 - o_N(1)$, then $\mathbb{P}[\mathcal{A}(H) \text{ is strongly isolated}] = o_N(1)$.
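The theorem's explicit constant can be evaluated directly; this snippet simply confirms the decimal bound quoted above.

```python
import math

# Any stable algorithm finds an isolated solution with probability at most
# (3*sqrt(17) - 9)/4 + o_N(1), and this constant is at most 0.84233.
c = (3 * math.sqrt(17) - 9) / 4
print(f"(3*sqrt(17) - 9)/4 = {c:.6f}")
```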
This identifies a strong computational barrier: stable polynomial-time (in fact, sub-exponential time under the low-degree conjecture) search strategies cannot reliably reach the bulk of the solution space.
Technical Approach
A robust combinatorial and probabilistic framework underpins these results, eschewing the overlap gap property (OGP) and instead utilizing tools such as Pitt's correlation inequality for Gaussian vectors. The argument proceeds by:
Perturbation and Stability: Consider a pair of correlated disorder instances $(H, H')$, where $H'$ is obtained from $H$ by a small Gaussian resampling. A stable algorithm cannot produce vastly different outputs on the two instances.
Neighborhood Counting: Conditioning on a high-probability event that the algorithm outputs a point close to an isolated solution of $H$, the analysis bounds the expected number of solutions of the perturbed instance $H'$ in a small Hamming ball centered at the output. Stability ensures that, with high probability, at most one such solution may appear.
Positive Correlation Obstruction: Utilizing Pitt’s inequality, it is shown that the events that two distinct points in this neighborhood are solutions are non-negatively correlated. This correlation makes it impossible for the solution count to concentrate tightly at one, in contradiction to any algorithm that would reliably find isolated solutions.
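The correlation phenomenon behind the obstruction can be sanity-checked in its simplest bivariate instance: if two candidate points $\sigma, \tau$ have positive overlap, the margins $\langle H_a,\sigma\rangle$ and $\langle H_a,\tau\rangle$ are positively correlated Gaussians, and their joint exceedance probability dominates the product of the marginals. The Monte Carlo sketch below illustrates this bivariate quadrant inequality; it is an illustration of the inequality's content, not the paper's argument.

```python
import numpy as np

rng = np.random.default_rng(7)

# Bivariate check: for rho-correlated standard Gaussians x, y and any
# threshold t, P(x >= t, y >= t) >= P(x >= t) * P(y >= t) when rho >= 0.
rho, t, n = 0.5, 1.0, 400_000
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)   # Corr(x, y) = rho

p_x = np.mean(x >= t)
p_y = np.mean(y >= t)
p_joint = np.mean((x >= t) & (y >= t))
print(f"joint = {p_joint:.4f}  vs  product = {p_x * p_y:.4f}")
```

It is exactly this non-negative correlation between "both points are solutions" events that prevents the local solution count from concentrating tightly at one.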
The derivation of the explicit constant $\frac{3\sqrt{17}-9}{4} \approx 0.84233$ is achieved via an extremal partitioning argument using quadratic equations governed by the correlation inequality.
Low-Degree Polynomial Consequences
Corollaries are established for the low-degree polynomial regime. Any (possibly randomized) degree-$D$ polynomial function of the disorder, with $D \le o(N/\log N)$, is automatically stable under the metric considered. Therefore, unless the degree $D$ grows nearly linearly in $N$, no algorithm in this powerful class can reliably find isolated solutions.
Per the low-degree conjecture, this signals that any polynomial-time or sub-exponential time algorithm should fail at the search-for-isolated-solutions task, matching the statistical intuition that locating a typical isolated solution is computationally as hard as brute-force enumeration.
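Why low-degree polynomials are stable under Gaussian resampling can be seen in one dimension: for a normalized degree-$D$ Hermite polynomial $f$ and $\rho$-correlated standard Gaussian inputs, $\mathbb{E}[(f(H)-f(H_\rho))^2] = 2(1-\rho^D)$, which is small when $\rho$ is close to $1$. The Monte Carlo sketch below checks this identity for $D = 2$; it is an illustration of the stability mechanism, not the paper's proof.

```python
import numpy as np

rng = np.random.default_rng(3)

# rho-correlated resampling of a standard Gaussian sample.
rho, n = 0.9, 500_000
h = rng.standard_normal(n)
h_rho = rho * h + np.sqrt(1 - rho**2) * rng.standard_normal(n)

f = lambda x: (x**2 - 1) / np.sqrt(2)   # normalized degree-2 Hermite He_2
empirical = np.mean((f(h) - f(h_rho))**2)
predicted = 2 * (1 - rho**2)            # Mehler: E[f(H) f(H_rho)] = rho^D
print(f"empirical = {empirical:.4f}, predicted = {predicted:.4f}")
```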
Geometric and Algorithmic Implications
These findings have significant implications for the landscape of the binary and symmetric perceptron models:
Rare, Connected Clusters: While the vast majority of solutions are isolated, the only solutions accessible to efficient algorithms reside in exponentially small clusters that are internally well connected (and, thus, non-isolated). Empirical studies and recent theory [abbe2022binary, baldassi2015subdominant, barbier2024atypical] confirm that these accessible solutions are highly atypical.
Failure of Heuristics Based on Solution Geometry: The results spotlight a model where strong clustering, freezing, and isolation occur for almost all solutions at all positive densities, yet algorithmic tractability persists for special, unrepresentative solutions. This severs the often-invoked correspondence between geometric transitions (such as clustering or OGP) and computational hardness in random CSPs [krzakala2007gibbs, Gamarnik21].
Sample-Based and Approximate Algorithms: The results generalize to approximate notions as well: algorithms which “almost” reach isolated solutions also fail with high probability. Consistent with recent work, this cannot be circumvented by relaxing the output criterion.
Generalizations: The methods extend to perceptrons defined by unions of intervals (symmetric perceptron, slab constraints), as well as to solutions isolated at sub-linear Hamming-distance scales, subject to their own technical limitations.
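For the symmetric variant, the half-space is replaced by a slab constraint $|\langle H_a,\sigma\rangle| \le \kappa\sqrt{N}$ (a simple union-of-intervals constraint). A minimal membership-check sketch, with all names and parameter values being illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

N, M = 200, 100
H = rng.standard_normal((M, N))

def is_symmetric_solution(H, sigma, kappa):
    """All normalized margins <H_a, sigma>/sqrt(N) must lie in [-kappa, kappa]."""
    margins = (H @ sigma) / np.sqrt(H.shape[1])
    return bool(np.all(np.abs(margins) <= kappa))

sigma = rng.choice([-1, 1], size=N)
# Smallest slab half-width kappa that admits this particular sigma.
worst = np.abs(H @ sigma).max() / np.sqrt(N)
print(f"smallest admitting kappa for this sigma: {worst:.3f}")
```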
Relevance to Broader Theory and Future Work
This work synthesizes perspectives from probability, statistical physics, complexity theory, and the geometry of high-dimensional polytopes. It bridges the gap between what is possible in principle (the structure of the solution space) and what is possible in practice (algorithmic search), demonstrating an irreducible complexity gap for this widely studied model.
Open Directions:
Sharpening the upper bound on the success probability remains open. As discussed in the paper, local Poisson statistics of solution counts suggest that the present approach cannot drive the bound all the way to $o_N(1)$, and novel methods will be necessary.
Extending analogous principles and barriers to the spherical perceptron, or to CSPs with hybrid geometries.
Characterizing the complexity class of atypical clusters and developing refined analyses of their structure and algorithmic accessibility.
Further developing technical tools that go beyond OGP and geometric methods for random CSPs.
Conclusion
The paper "Stable algorithms cannot reliably find isolated perceptron solutions" (2604.00328) rigorously establishes that all stable, and hence all low-degree, algorithms fail to reliably find isolated solutions in random binary perceptron instances. The fraction of solution space accessible to efficient algorithms is exponentially small, and the generic solution landscape enforces an algorithmic barrier not captured by conventional geometric heuristics. These results advance understanding of the interplay between computational hardness, solution-space structure, and algorithmic accessibility in high-dimensional optimization and learning theory.