Low Autocorrelation Binary Sequence (LABS)

Updated 15 November 2025
  • LABS is a combinatorial optimization problem for designing binary sequences with minimal aperiodic autocorrelation to maximize Golay's merit factor.
  • Metaheuristic techniques, including GPU-accelerated and quantum-inspired methods, address its NP-hard complexity and exponential search space.
  • Advanced algorithms use branch-and-bound strategies and symmetry breaking to achieve tighter relaxations and efficient navigation in rugged energy landscapes.

The Low Autocorrelation Binary Sequence (LABS) problem is a canonical combinatorial optimization challenge in the design of binary sequences with minimal aperiodic autocorrelation. Its high complexity and intricate energy landscape have made LABS a benchmark for metaheuristic development, quantum algorithm benchmarking, and theoretical analysis. The principal objective is to maximize Golay's merit factor by minimizing the aggregate squared autocorrelation at all nontrivial shifts. Recent advances have driven the search frontier from exhaustive methods for modest sequence lengths $N$ to massively parallel algorithms, quantum-inspired heuristics, and hybrid frameworks that scale to hundreds of variables.

1. Formal Problem Statement and Statistical Foundations

LABS is defined for sequences $S = (s_1, \dots, s_N)$ with entries $s_i \in \{\pm 1\}$. The aperiodic autocorrelation at lag $k$ is

$$C_k(S) = \sum_{i=1}^{N-k} s_i s_{i+k}, \qquad k = 1, \dots, N-1,$$

with the total "energy" (objective function) given by

$$E_N(S) = \sum_{k=1}^{N-1} [C_k(S)]^2.$$

The optimization goal is equivalently cast as maximizing the merit factor

$$F_N(S) = \frac{N^2}{2 E_N(S)},$$

so that

$$S^* = \arg\min_S E_N(S) \iff S^* = \arg\max_S F_N(S).$$

It is rigorously established that the LABS problem is NP-hard; for large $N$, exponential scaling is unavoidable with brute-force or exact methods. Random sequences have $\mathbb{E}\,F_N \approx 1/2$, whereas the best-known constructions and searches approach $F_N \approx 6.34$ for appended/truncated Legendre families (Katz, 2018). The conjectured upper bound, due to Golay, is $F_\infty \approx 12.3248$.
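These definitions translate directly into code. A minimal sketch (the function names are illustrative, not from any cited solver):

```python
import numpy as np

def autocorrelations(s):
    """Aperiodic autocorrelations C_k(S) for k = 1, ..., N-1."""
    s = np.asarray(s)
    N = len(s)
    return np.array([np.dot(s[:N - k], s[k:]) for k in range(1, N)])

def labs_energy(s):
    """Sidelobe energy E_N(S): sum of squared autocorrelations."""
    return int(np.sum(autocorrelations(s) ** 2))

def merit_factor(s):
    """Golay's merit factor F_N(S) = N^2 / (2 E_N(S))."""
    return len(s) ** 2 / (2 * labs_energy(s))

# Barker sequence of length 13: all sidelobes satisfy |C_k| <= 1,
# giving E_13 = 6 and merit factor 169/12 ~ 14.08.
barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
print(labs_energy(barker13), merit_factor(barker13))
```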

2. Structure of the Energy Landscape and Local Optima

The configuration space $\mathcal{S} = \{\pm 1\}^N$ is characterized by a rugged, glassy landscape with exponentially many local minima ("basins"), as shown by exhaustive Local Optima Network (LON) analysis up to $N = 24$ (Tomassini, 2022). Key findings include:

  • The number of local optima grows as $|V| \sim \exp(0.47\,N)$.
  • Basin sizes for global minima increase exponentially, but are dwarfed by the proliferation of near-optimal traps.
  • Self-loop weights (the probability of staying in a basin after a random flip and descent) increase with $N$, implying deeply "sticky" minima.
  • Average shortest path lengths to the global optimum grow linearly in $N$, but must traverse an exponential number of detours.
  • Centrality (PageRank) analysis reveals that good optima are topologically central but not easily reachable via naive local search.

These structural properties strongly motivate large-neighborhood perturbation strategies, multi-start methods, and adaptive heuristics that exploit sampled basin information.
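The basin/descent picture above can be reproduced with the simplest possible local search. The following sketch (illustrative only, not the LON methodology of the cited work) drives a random sequence to a one-bit-flip local minimum of $E_N$:

```python
import numpy as np

def labs_energy(s):
    N = len(s)
    return sum(int(np.dot(s[:N - k], s[k:])) ** 2 for k in range(1, N))

def greedy_descent(s):
    """First-improvement one-bit-flip descent: keep flipping any spin
    that lowers the energy until no single flip improves, i.e. until
    the sequence sits at the bottom of its basin."""
    s = np.array(s)
    e = labs_energy(s)
    improved = True
    while improved:
        improved = False
        for i in range(len(s)):
            s[i] = -s[i]
            e_new = labs_energy(s)
            if e_new < e:
                e = e_new
                improved = True
            else:
                s[i] = -s[i]  # undo a non-improving flip
    return s, e

rng = np.random.default_rng(0)
start = rng.choice([-1, 1], size=20)
s_min, e_min = greedy_descent(start)
# At termination, every one-flip neighbor of s_min has energy >= e_min.
```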

3. Exact Algorithms and Bounds

Exhaustive search is intractable for $N > 66$. State-of-the-art exact solvers are based on branch-and-bound strategies with tight relaxations and symmetry breaking:

  • The algorithm of Packebusch & Mertens (Packebusch et al., 2015) achieves $\Theta(N \cdot 1.73^N)$ time by combining Mertens', Prestwich's, and Wiggenbrock's lag-wise bounds. The recursive search fixes spins from both ends, checks bounds after each extension, and prunes subtrees above the best-known energy.
  • Prestwich (Prestwich, 2013) further tightens relaxations via cancellation/reinforcement analysis, template-guided value ordering, and symmetry breaking via lex-leader constraints. This pushes skew-symmetric optimality to $N = 89$ and general optimality to $N = 66$.

A key number-theoretic result (Ukil, 2015) shows that the possible energies for fixed $N$ are spaced by $4$, with exact formulas for the theoretical minimum. Barker sequences ($N = 4, 5, 7, 11, 13$) uniquely achieve these minima for $N \leq 13$, and nonexistence is rigorously proven for all odd $N > 13$ (Niu et al., 2018). The even cases $N > 13$ remain open but are widely conjectured to admit no perfect sequences.
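A brute-force baseline with the simplest symmetry breaking, fixing the first spin to $+1$ (which halves the space under the negation symmetry $S \to -S$), illustrates the exact-search setting and recovers the Barker optima for small $N$. This is an illustrative sketch, not the branch-and-bound of the cited solvers:

```python
from itertools import product

def labs_energy(s):
    N = len(s)
    return sum(sum(s[i] * s[i + k] for i in range(N - k)) ** 2
               for k in range(1, N))

def exhaustive_optimum(N):
    """Brute-force minimum energy over {+1,-1}^N, breaking the global
    negation symmetry S -> -S by fixing the first spin to +1."""
    best = None
    for tail in product((-1, 1), repeat=N - 1):
        e = labs_energy((1,) + tail)
        if best is None or e < best:
            best = e
    return best

# N = 13: the Barker-13 sequence attains the exact optimum E = 6.
print(exhaustive_optimum(13))
```

Real solvers additionally exploit the reversal and alternation symmetries and prune with lag-wise bounds, which is what brings the base down to roughly $1.73^N$.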

4. Metaheuristic and Parallel Algorithms

Due to exponential scaling, metaheuristics dominate for $N \gtrsim 70$:

  • Memetic Tabu Search (MTS): Maintains a population of $K$ candidates; each child is formed via recombination (probability $p_{\mathrm{comb}} = 0.9$) or mutation, and locally refined using short-run tabu search. The tabu list prevents immediate reversals, allowing escape from shallow traps.
  • GPU-Accelerated MTS (Zhang et al., 1 Apr 2025): The solver leverages block- and thread-level parallelism on NVIDIA A100 GPUs. Each block runs an independent replica of the algorithm, while threads evaluate all one-bit-flip neighbors in parallel. Carefully packed binary data structures allow residence in shared memory for up to $N = 120$. This implementation achieved an $8\times$–$26\times$ speedup over a 16-core CPU.

A comparative runtime table (TTS = time-to-solution):

| Problem size $N$ | GPU TTS (s) | CPU-16 TTS (s) | Speedup |
|------------------|-------------|----------------|---------|
| 68               | 5.15        | 136.6          | 26.5    |
| 83               | ...         | ...            | 8–26    |
  • Self-Avoiding Walks (SAW): The sokol_skew solver (Bošković et al., 2022) runs parallel SAWs in the skew-symmetric subspace. Each walk avoids cycles via a hash table of visited pivots, operates in contiguous blocks (length $8D$, with $D = (L+1)/2$), and achieves up to $387\times$ speedup versus CPU methods. A predictive stopping model based on an exponential law ensures probabilistic guarantees (e.g., 99% probability for $L \leq 223$).
  • Hybrid Dual-Step Optimization (Pšeničnik et al., 11 Sep 2024): Combines GPU-parallel SAW in partitioned/skew-symmetric subspaces with a subsequent unrestricted priority-queue-guided DFS over the full space. This approach improved merit factors across $450 \leq L \leq 527$, outperforming previous stochastic and exhaustive search methods.
  • Socio-Cognitive Mutation Operators (Urbańczyk et al., 8 Nov 2025): Incorporates TOPSIS-like metaheuristic principles into evolutionary algorithms, balancing imitation of top performers and repulsion from poor solutions. Multistep "repel worst" procedures significantly lower mean energy compared to standard genetic operators.
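The local-refinement step shared by these methods can be sketched as a minimal single-replica tabu search over one-bit flips (parameters, tenure, and structure are illustrative, not those of the cited implementations):

```python
import numpy as np

def labs_energy(s):
    N = len(s)
    return sum(int(np.dot(s[:N - k], s[k:])) ** 2 for k in range(1, N))

def tabu_search(s, iters=200, tenure=7):
    """Each step flips the best neighbor whose index is not tabu;
    a move that beats the best energy so far is always allowed
    (aspiration). Recently flipped indices stay tabu for `tenure`
    steps, preventing immediate reversals."""
    s = np.array(s)
    best_s, best_e = s.copy(), labs_energy(s)
    tabu = {}  # index -> last iteration at which the flip is still tabu
    for t in range(iters):
        candidates = []
        for i in range(len(s)):
            s[i] = -s[i]
            e = labs_energy(s)
            s[i] = -s[i]
            if tabu.get(i, -1) < t or e < best_e:  # non-tabu or aspiration
                candidates.append((e, i))
        e, i = min(candidates)  # best admissible move, even if uphill
        s[i] = -s[i]
        tabu[i] = t + tenure
        if e < best_e:
            best_s, best_e = s.copy(), e
    return best_s, best_e

rng = np.random.default_rng(1)
start = rng.choice([-1, 1], size=24)
s_best, e_best = tabu_search(start)
```

In MTS this routine would be run briefly on each offspring; the GPU variant parallelizes the neighbor-evaluation loop across threads.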

5. Quantum and Quantum-Inspired Approaches

Recent quantum algorithms have targeted LABS as a stringent testbed:

  • Pauli Correlation Encoding (PCE) (Sciorilli et al., 20 Jun 2025): Achieves polynomial qubit reduction ($n = \mathcal{O}(\sqrt{N})$) and suppresses barren plateaus. An $n = 6$ qubit, 10-layer brickwork circuit (30 two-qubit gates) was sufficient to find exact solutions for $N \leq 44$, with a scaling advantage over QAOA and classical heuristics (tabu-search base $b = 1.338$ vs $1.358$ for even $N$).
  • Quantum-Enhanced Memetic Tabu Search (QE-MTS) (Cadavid et al., 6 Nov 2025): Seeds classical MTS populations with high-quality states from digitized counterdiabatic quantum optimization (DCQO). Empirically, QE-MTS suppresses the time-to-solution scaling to $\mathcal{O}(1.24^N)$, versus $\mathcal{O}(1.34^N)$ for classical MTS, achieving a crossover in efficiency for $N \gtrsim 47$.

6. Symmetry, Sequence Classes, and Theoretical Limits

Skew-symmetry for odd $N$ reduces the dimensionality (search space $2^{(N+1)/2}$). Although optimal sequences often coincide with skew-symmetric ones for low $N$, several recent best-known solutions (e.g., $N = 99$, $N = 107$ (Zhang et al., 1 Apr 2025)) break this symmetry, demonstrating that exclusive restriction to skew-symmetry is sub-optimal beyond moderate sizes.
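Concretely, for odd $N = 2D - 1$ the $D$ leading spins determine the rest via $s_{D+l} = (-1)^l s_{D-l}$, which is where the $2^{(N+1)/2}$ count comes from. A small sketch of this expansion (an illustrative helper, not taken from the cited solvers):

```python
def expand_skew_symmetric(half):
    """Build a full sequence of odd length N = 2D - 1 from its first
    D = (N+1)/2 spins using s_{D+l} = (-1)^l * s_{D-l} (1-based)."""
    D = len(half)
    s = list(half)
    for l in range(1, D):
        s.append((-1) ** l * half[D - 1 - l])  # 0-based index of s_{D-l}
    return s

# D = 4 free spins determine a length N = 7 sequence:
s = expand_skew_symmetric([1, 1, -1, 1])
```

Searching only over `half` shrinks the space from $2^N$ to $2^{(N+1)/2}$, at the cost of excluding the symmetry-breaking optima noted above.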

Combinatorial constructions (almost difference sets, d-form functions) enable infinite families with nearly optimal autocorrelation properties, but reach only $|C(\tau)| \leq 3$ ("almost optimal") outside the exceptional Barker codes (Sun et al., 2017, Zhang et al., 2018). Interleaving techniques and cyclotomic class algebra yield optimal two-level out-of-phase autocorrelation ($\pm 2$) for even period, but not for all $N$ (Zhang et al., 2018).

7. Future Directions and Open Questions

Ongoing research seeks to:

  • Further reduce the exponential base in solver scaling, via tighter relaxations, better parallelism, or hybrid quantum-classical integration.
  • Extend combinatorial constructions to new residue classes or reduce peak sidelobe levels below $3$ for infinite families.
  • Exploit local optima network metrics to guide adaptive, network-aware heuristics for large $N$.
  • Pin down the true asymptotic supremum of the merit factor, a question spanning combinatorial design, analytic number theory, and high-performance computing.
  • Generalize quantum-enhanced metaheuristics and Pauli encoding frameworks to other paradigmatic binary optimization problems.

The LABS problem thus remains a central and challenging domain, stimulating interdisciplinary advances across discrete optimization, algorithm engineering, quantum computation, and combinatorial mathematics.
