Overlap Gap and Computational Thresholds in the Square Wave Perceptron (2506.05197v2)
Abstract: Square Wave Perceptrons (SWPs) form a class of neural network models with an oscillating activation function that exhibit intriguing ``hardness'' properties in the high-dimensional limit at a fixed signal-to-noise ratio $\alpha = O(1)$. In this work, we examine two key aspects of these models. The first concerns the so-called overlap-gap property, a disconnectivity feature of the geometry of the solution space of combinatorial optimization problems that has been proven to cause the failure of a large family of solvers and is conjectured to be a symptom of algorithmic hardness. We identify, in both the storage and teacher-student settings, the emergence of an overlap gap at a threshold $\alpha_{\mathrm{OGP}}(\delta)$, which can be made arbitrarily small by suitably increasing the frequency of oscillations $1/\delta$ of the activation. This suggests that in this small-$\delta$ regime, typical instances of the problem are hard to solve even at small values of $\alpha$. Second, in the teacher-student setup, we show that the recovery threshold of the planted signal for message-passing algorithms can be made arbitrarily large by reducing $\delta$. These properties make SWPs both a challenging benchmark for algorithms and an interesting candidate for cryptographic applications.
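To fix ideas, the following is a minimal, hypothetical sketch of an SWP constraint in the storage setting. The abstract does not spell out the activation or the weight prior, so this assumes a square-wave nonlinearity of period $2\delta$ applied to the usual perceptron preactivation $w\cdot x/\sqrt{n}$, binary weights, and $m = \alpha n$ random patterns; it is meant only to illustrate the role of $\delta$ (smaller $\delta$ means faster oscillations), not to reproduce the paper's exact model.

```python
import numpy as np

def square_wave(x, delta):
    """Assumed square-wave activation of period 2*delta:
    +1 on [0, delta), -1 on [delta, 2*delta), extended periodically."""
    return np.where(np.mod(x, 2 * delta) < delta, 1.0, -1.0)

def swp_satisfied(w, X, y, delta):
    """Check whether weights w satisfy all pattern/label constraints
    under the assumed square-wave activation (storage setting)."""
    n = X.shape[1]
    preact = X @ w / np.sqrt(n)  # standard perceptron preactivation
    return bool(np.all(square_wave(preact, delta) == y))

# Toy instance: n inputs, m = alpha * n random Gaussian patterns with +/-1 labels.
rng = np.random.default_rng(0)
n, alpha, delta = 200, 0.5, 0.1
m = int(alpha * n)
X = rng.standard_normal((m, n))
y = rng.choice([-1.0, 1.0], size=m)
w = rng.choice([-1.0, 1.0], size=n)  # candidate binary weight vector
print(swp_satisfied(w, X, y, delta))
```

In this toy picture, decreasing $\delta$ makes the activation oscillate faster in the preactivation, which is the regime where the abstract reports the overlap gap appearing at arbitrarily small $\alpha$ and the message-passing recovery threshold growing arbitrarily large.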