(0,1)-CVP: Sub-2^n Algorithmic Advances
- (0,1)-CVP is a lattice problem defined by finding the closest subset sum of basis vectors with binary (0,1) coefficients.
- The breakthrough algorithm reduces complexity from 2^n to approximately (1.7299)^n using split-and-list partitioning and fast triangle detection.
- Reductions to MAX-SAT and minimum-weight clique establish fine-grained equivalences that deepen our understanding of lattice-based cryptographic hardness.
The $(0,1)$-Closest Vector Problem ($(0,1)$-CVP) is a specialized instance of the Closest Vector Problem (CVP) in lattice theory, central to both theoretical computer science and cryptography. In this variant, one seeks the closest lattice point to a target vector where the lattice point must be a subset sum of the basis vectors—that is, coefficients are restricted to $0$ or $1$. This restriction parallels classic problems in combinatorial optimization and connects deeply to fine-grained complexity conjectures through new algorithmic frameworks and reductions.
1. Formal Definition and Problem Structure
Let $b_1, \ldots, b_n \in \mathbb{Z}^d$ denote a full-rank basis of an integer lattice, with basis matrix $B$. Given a target vector $t \in \mathbb{Z}^d$ and a distance $r \geq 0$, the decision problem $(0,1)$-CVP is specified as:
Given $B$, $t$, and $r$:
- Return YES if there exists $x \in \{0,1\}^n$ such that $\|Bx - t\|_2 \leq r$,
- Otherwise, return NO (i.e., for all $x \in \{0,1\}^n$, $\|Bx - t\|_2 > r$).
Formally, the optimization version asks for
$$ \min_{x \in \{0,1\}^n} \left\| \sum_{i=1}^{n} x_i b_i - t \right\|_2 . $$
This formulation constrains the solution space to the $2^n$ subset sums over the basis vectors, mapping directly to fundamental combinatorial enumeration.
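To make the definition concrete, here is a minimal brute-force sketch (illustrative only, with hypothetical names; its $2^n$ enumeration is exactly the naïve baseline that the algorithms below improve upon):

```python
import itertools
import math

def closest_binary_combination(B, t):
    """Brute-force (0,1)-CVP: scan all 2^n subset sums of the columns
    of B and return the minimum Euclidean distance to t, together with
    the optimal coefficient vector x in {0,1}^n."""
    n = len(B[0])          # number of basis vectors (columns of B)
    d = len(B)             # ambient dimension
    best_dist, best_x = math.inf, None
    for x in itertools.product((0, 1), repeat=n):
        # subset sum Bx of the chosen basis vectors
        v = [sum(B[i][j] * x[j] for j in range(n)) for i in range(d)]
        dist = math.sqrt(sum((v[i] - t[i]) ** 2 for i in range(d)))
        if dist < best_dist:
            best_dist, best_x = dist, x
    return best_dist, best_x

# Basis columns (1,0) and (0,1); the target (1,1) is itself a subset sum.
B = [[1, 0],
     [0, 1]]
dist, x = closest_binary_combination(B, [1, 1])
```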
2. Algorithmic Advances: Beating the $2^n$ Barrier
The previously best-known exact algorithm for $(0,1)$-CVP required explicit enumeration over all $2^n$ possibilities. The principal contribution is an algorithm achieving a running time of $O(1.7299^n)$ for exact $(0,1)$-CVP, provided all entries in $B$ and $t$ are polynomially bounded (Abboud et al., 7 Jan 2025). This surpasses the naïve exhaustive-search limit.
Algorithmic Framework
- Split-and-List: Partition the index set $\{1, \ldots, n\}$ into $k$ disjoint blocks of size $n/k$. For each block, enumerate all subset sums, resulting in $k$ lists of $2^{n/k}$ vectors.
- Pairwise Quadratic Decomposition: Leverage the Euclidean norm, expressing
$$ \left\| \sum_{i=1}^{k} v_i - t \right\|_2^2 = \|t\|_2^2 + \sum_{i=1}^{k} \left( \|v_i\|_2^2 - 2\langle v_i, t\rangle \right) + 2 \sum_{1 \leq i < j \leq k} \langle v_i, v_j \rangle, $$
where $v_i$ is the partial subset sum chosen from block $i$, and encode the problem as a minimum-weight $k$-clique problem in a multipartite graph whose vertices are the enumerated partial sums.
- Triangle (3-Clique) Method with Fast Matrix Multiplication: For $k = 3$, the multipartite graph contains $3 \cdot 2^{n/3}$ vertices. Fast weighted triangle detection, using state-of-the-art matrix multiplication ($\omega \leq 2.372$), solves the minimum-weight triangle problem in $\tilde{O}(2^{\omega n/3})$ time, yielding a total complexity of $O(1.7299^n)$.
High-Level Pseudocode for the Algorithm
Enumeration of the three lists takes $O(2^{n/3} \cdot \mathrm{poly}(n))$ time; triangle detection runs in $\tilde{O}(2^{\omega n/3})$ time and dominates the total cost.
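The steps above can be sketched in Python as follows (a simplified rendering with hypothetical names; assumes $n$ divisible by 3, and the explicit triple loop stands in for the step that the actual algorithm replaces with fast matrix multiplication):

```python
import itertools
import math

def cvp01_split_and_list(B, t):
    """Split-and-list sketch for (0,1)-CVP with k = 3 blocks.
    Enumerates the 2^(n/3) partial subset sums of each block, then
    searches for the minimum-weight 'triangle' (one partial sum per
    block).  The triple loop below is the step replaced by fast
    matrix multiplication in the actual algorithm."""
    n, d = len(B[0]), len(B)
    assert n % 3 == 0, "sketch assumes n divisible by 3"
    blocks = [range(b * n // 3, (b + 1) * n // 3) for b in range(3)]

    def partial_sums(idx):
        # all subset sums of the basis vectors indexed by idx
        out = []
        for bits in itertools.product((0, 1), repeat=len(idx)):
            out.append([sum(B[i][j] * c for j, c in zip(idx, bits))
                        for i in range(d)])
        return out

    lists = [partial_sums(list(idx)) for idx in blocks]
    best = math.inf
    for v1, v2, v3 in itertools.product(*lists):
        s = [v1[i] + v2[i] + v3[i] for i in range(d)]
        best = min(best, sum((s[i] - t[i]) ** 2 for i in range(d)))
    return math.sqrt(best)
```

For the identity basis in three dimensions, the sketch recovers the exact minimum distance, matching the brute-force definition.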
3. Reductions to MAX-SAT and Minimum-Weight $k$-Clique
Equivalence to Weighted Max-$p$-SAT
For even $p$, there is a polynomial-time Karp reduction from $(0,1)$-CVP in the $\ell_p$ norm to Weighted Max-$p$-SAT on $n$ variables. Each variable encodes inclusion of a particular basis vector. By constructing weighted $p$-SAT clauses whose satisfaction records quadratic or higher-order combinations, the optimum truth assignment coincides (up to a shift) with the minimum lattice distance.
This reduction is tight for all even $p$, showing a fine-grained equivalence (up to polynomial factors) between these problems.
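To illustrate the degree-$p$ structure the reduction exploits (shown here for $p = 2$), the following sketch expands $\|Bx - t\|_2^2$ into constant, linear, and pairwise monomials over binary variables; these are the quantities the weighted clauses must encode. This is an illustrative expansion under the stated definitions, not the paper's exact clause gadget, and all names are hypothetical:

```python
def quadratic_form(B, t):
    """Expand ||Bx - t||_2^2 into a constant, a linear part, and a
    pairwise part over binary variables (using x_j^2 = x_j).  These
    monomials are what weighted Max-2-SAT clauses must encode."""
    n, d = len(B[0]), len(B)
    col = lambda j: [B[i][j] for i in range(d)]
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    const = dot(t, t)
    # linear coefficients: ||b_j||^2 - 2<b_j, t>  (since x_j^2 = x_j)
    lin = [dot(col(j), col(j)) - 2 * dot(col(j), t) for j in range(n)]
    # pairwise coefficients: 2<b_j, b_l>
    quad = {(j, l): 2 * dot(col(j), col(l))
            for j in range(n) for l in range(j + 1, n)}
    return const, lin, quad

def evaluate(const, lin, quad, x):
    """Evaluate the expanded form at x; equals ||Bx - t||_2^2."""
    return (const + sum(c * xi for c, xi in zip(lin, x))
            + sum(w * x[j] * x[l] for (j, l), w in quad.items()))
```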
Reduction to Minimum-Weight $k$-Clique
The quadratic structure of the Euclidean norm allows reduction of $(0,1)$-CVP to minimum-weight $k$-clique detection. Assign each block’s subset sums to a vertex part, with the edge weights as described above. The optimal subset sum then corresponds exactly to the minimum total clique weight. For $k = 3$, the fastest algorithm for minimum-weight triangle detection translates directly into the best $(0,1)$-CVP algorithm to date.
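A minimal sketch of the weight assignment (hypothetical names): vertex weights absorb the $\|v_i\|^2 - 2\langle v_i, t\rangle$ terms, edge weights absorb the cross terms $2\langle v_i, v_j\rangle$, and adding back $\|t\|^2$ to the minimum triangle weight recovers the minimum squared distance:

```python
import itertools

def min_weight_triangle_distance(lists, t):
    """Tripartite clique reduction for (0,1)-CVP:
      vertex weight  w(v)    = ||v||^2 - 2<v, t>
      edge weight    w(u, v) = 2<u, v>
    so that ||v1 + v2 + v3 - t||^2 equals ||t||^2 plus the total
    vertex and edge weight of the triangle {v1, v2, v3}."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    vtx = lambda v: dot(v, v) - 2 * dot(v, t)
    best = min(vtx(v1) + vtx(v2) + vtx(v3)
               + 2 * (dot(v1, v2) + dot(v1, v3) + dot(v2, v3))
               for v1, v2, v3 in itertools.product(*lists))
    return best + dot(t, t)   # minimum squared distance
```

The explicit minimum over all triples is for illustration; the point of the reduction is that this step becomes a minimum-weight triangle search amenable to matrix-multiplication techniques.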
4. Complexity Analysis and Fine-Grained Implications
Breaking the $2^n$ barrier is achieved by applying triangle detection with fast matrix multiplication, giving runtime $\tilde{O}(2^{\omega n/3}) \leq O(1.7299^n)$, where $\omega \leq 2.372$ is the matrix multiplication exponent (Abboud et al., 7 Jan 2025). This is an exponential improvement over brute-force enumeration.
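As a quick arithmetic sanity check (assuming the bound $\omega \leq 2.372$ behind the stated constant), the base of the exponent works out as follows:

```python
# With omega <= 2.372 (a recent bound on the matrix multiplication
# exponent), the triangle step costs 2^(omega * n/3), i.e. the total
# runtime is (2^(omega/3))^n, whose base is approximately 1.7299.
omega = 2.372
base = 2 ** (omega / 3)
```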
Implications for Complexity Conjectures
- SETH Barriers: Known SETH-based lower bounds for CVP in other $\ell_p$ norms (where $p$ is not even) preclude $2^{(1-\varepsilon)n}$-time algorithms for any $\varepsilon > 0$. However, barriers for even $p$ (in particular, the Euclidean norm $\ell_2$) were not established, and this result shows explicit progress, demonstrating that the Euclidean case is algorithmically easier in this restricted setting.
- Minimum-Weight $k$-Clique and APSP: It is widely conjectured that minimum-weight $k$-clique requires $n^{k - o(1)}$ time. Any breakthrough in minimum-weight triangle detection (e.g., faster than the current matrix-multiplication-based bounds) would imply faster $(0,1)$-CVP algorithms, so the presumed hardness of $(0,1)$-CVP provides cryptographic support to the minimum-weight $k$-clique and, by implication, APSP hardness conjectures.
5. Broader Connections and Cryptographic Consequences
A fine-grained equivalence exists between $(0,1)$-CVP in the $\ell_p$ norm (for even $p$) and Weighted Max-$p$-SAT, meaning improvements for one translate precisely to the other (up to polynomial factors). This establishes direct links between lattice-based cryptographic hardness assumptions (in the worst case) and central fine-grained complexity conjectures—notably SETH (Strong Exponential Time Hypothesis), MAX-SAT, minimum clique, and APSP.
Additionally, prior fine-grained hardness reductions for general CVP all make essential use of the $\{0,1\}$-coefficient case. Consequently, the new algorithms show that, in the Euclidean setting, such restricted reductions cannot exclude sub-$2^n$ time; any unconditional lower bound must leverage the structure of general coefficients outside $\{0,1\}$.
6. Summary and Impact
The discovery of a sub-$2^n$ exact algorithm for $(0,1)$-CVP establishes a new frontier for both the algorithmic understanding of lattice problems and their connections to classical hypotheses in fine-grained complexity. The equivalences and reductions presented inform ongoing debates regarding the fine-grained hardness of cornerstone combinatorial and cryptographic problems, illustrating how algorithmic advances in specialized lattice settings feed directly into broader complexity-theoretic landscapes (Abboud et al., 7 Jan 2025).