
(0,1)-CVP: Sub-2^n Algorithmic Advances

Updated 2 May 2026
  • (0,1)-CVP is a lattice problem defined by finding the closest subset sum of basis vectors with binary (0,1) coefficients.
  • The breakthrough algorithm reduces complexity from 2^n to approximately (1.7299)^n using split-and-list partitioning and fast triangle detection.
  • Reductions to MAX-SAT and minimum-weight clique establish fine-grained equivalences that deepen our understanding of lattice-based cryptographic hardness.

The (0,1)-Closest Vector Problem ((0,1)-CVP) is a specialized instance of the Closest Vector Problem (CVP) in lattice theory, central to both theoretical computer science and cryptography. In this variant, one seeks the closest lattice point to a target vector where the lattice point must be a subset sum of the basis vectors; that is, coefficients are restricted to $0$ or $1$. This restriction parallels classic problems in combinatorial optimization and connects deeply to fine-grained complexity conjectures through new algorithmic frameworks and reductions.

1. Formal Definition and Problem Structure

Let $B = (\mathbf b_1,\dots,\mathbf b_n) \in \mathbb Z^{m \times n}$ denote a full-rank basis of an integer lattice $\mathcal L(B) = \{\sum_{i=1}^n z_i\,\mathbf b_i \mid z_i \in \mathbb Z\}$. Given a target vector $\mathbf t \in \mathbb Z^m$ and a distance $d > 0$, the decision problem $(0,1)$-$\mathrm{CVP}_2$ is specified as:

Given $B$, $\mathbf t$, and $d$:

  • Return YES if there exists $\mathbf z \in \{0,1\}^n$ such that $\|B\mathbf z - \mathbf t\|_2 \le d$,
  • Otherwise, return NO (i.e., for all $\mathbf z \in \{0,1\}^n$, $\|B\mathbf z - \mathbf t\|_2 > d$).

Formally,

$$(0,1)\text{-}\mathrm{CVP}_2(B, \mathbf t, d) = \mathrm{YES} \iff \min_{\mathbf z \in \{0,1\}^n} \|B\mathbf z - \mathbf t\|_2 \le d.$$

This formulation constrains the solution space to the $2^n$ subset sums of the basis vectors, mapping directly to fundamental combinatorial enumeration.
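The decision problem above can be made concrete with a minimal brute-force decider. This is an illustrative sketch only (the function name and input encoding are ours, not from the paper); it enumerates all $2^n$ binary coefficient vectors exactly as the naïve baseline does.

```python
import itertools
import math

def cvp01_decide(B, t, d):
    """Brute-force (0,1)-CVP_2: is there z in {0,1}^n with ||Bz - t||_2 <= d?
    B is a list of n basis vectors, each a list of m integers; t has length m."""
    n, m = len(B), len(t)
    for z in itertools.product((0, 1), repeat=n):
        # Compute the subset sum B z selected by the binary coefficients z.
        v = [sum(z[i] * B[i][j] for i in range(n)) for j in range(m)]
        dist = math.sqrt(sum((v[j] - t[j]) ** 2 for j in range(m)))
        if dist <= d:
            return True
    return False

# Basis vectors (3,0) and (0,4); target (3,4) is hit exactly by z = (1,1).
print(cvp01_decide([[3, 0], [0, 4]], [3, 4], 0.5))  # → True
print(cvp01_decide([[3, 0], [0, 4]], [1, 1], 0.5))  # → False
```

This quadratic-per-candidate loop is the $2^n$ baseline that the algorithm of Section 2 improves upon.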

2. Algorithmic Advances: Beating the $2^n$ Barrier

The previously best-known exact algorithm for $(0,1)$-$\mathrm{CVP}_2$ required explicit enumeration over all $2^n$ coefficient vectors. The principal contribution is an algorithm achieving a running time of approximately $(1.7299)^n$ for exact $(0,1)$-$\mathrm{CVP}_2$, provided all entries in $B$ and $\mathbf t$ are of bounded magnitude (Abboud et al., 7 Jan 2025). This surpasses the naïve exhaustive-search limit.

Algorithmic Framework

  • Split-and-List: Partition the index set $\{1,\dots,n\}$ into $3$ disjoint blocks of size $n/3$. For each block, enumerate all subset sums, resulting in $3$ lists of $2^{n/3}$ vectors.
  • Pairwise Quadratic Decomposition: Leverage the Euclidean norm, expressing

$$\|B\mathbf z - \mathbf t\|_2^2 = \Big\|\sum_{j=1}^{3} \mathbf u_j\Big\|^2 = \sum_{j=1}^{3} \|\mathbf u_j\|^2 + 2\sum_{1 \le j < k \le 3} \langle \mathbf u_j, \mathbf u_k\rangle, \qquad \mathbf u_j = \mathbf v_j - \tfrac{1}{3}\mathbf t,$$

where $\mathbf v_j$ is the partial subset sum contributed by block $j$. Distributing the vertex terms onto edges, e.g. $w(j,k) = \tfrac{1}{2}(\|\mathbf u_j\|^2 + \|\mathbf u_k\|^2) + 2\langle \mathbf u_j, \mathbf u_k\rangle$, makes the three edge weights of a triangle sum to the squared distance, and encodes the problem as a minimum-weight $k$-clique problem in a multipartite graph whose vertices are the enumerated partial sums.

  • Triangle (3-Clique) Method with Fast Matrix Multiplication: For $k = 3$, the multipartite graph contains $3 \cdot 2^{n/3}$ vertices. Fast weighted triangle detection, using state-of-the-art matrix multiplication ($\omega < 2.373$), solves the minimum-weight triangle problem in $2^{\omega n/3 + o(n)}$ time, yielding a total complexity of $2^{\omega n/3} \approx (1.7299)^n$.
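The pairwise quadratic decomposition can be verified numerically. The sketch below assumes one concrete centering, $\mathbf u_j = \mathbf v_j - \mathbf t/3$, which is one standard way to distribute the squared distance onto the three triangle edges (the paper's exact weight scheme may differ in details):

```python
import random

def sq(v):  # squared Euclidean norm
    return sum(x * x for x in v)

def inner(a, b):  # inner product
    return sum(x * y for x, y in zip(a, b))

random.seed(0)
m = 5
t = [random.randint(-10, 10) for _ in range(m)]
# Three random "partial subset sums", one per block.
v = [[random.randint(-10, 10) for _ in range(m)] for _ in range(3)]

# Center each block's contribution by t/3, then put the squared distance
# entirely on the triangle's edge weights.
u = [[v[j][i] - t[i] / 3 for i in range(m)] for j in range(3)]

def w(j, k):  # edge weight between parts j and k
    return 0.5 * (sq(u[j]) + sq(u[k])) + 2 * inner(u[j], u[k])

total = [v[0][i] + v[1][i] + v[2][i] - t[i] for i in range(m)]
lhs = sq(total)                       # ||v1 + v2 + v3 - t||^2
rhs = w(0, 1) + w(0, 2) + w(1, 2)     # sum of the three edge weights
print(abs(lhs - rhs) < 1e-9)  # → True
```

Each vertex term $\|\mathbf u_j\|^2$ appears in two of the three edges with coefficient $1/2$, so the edge weights sum exactly to the squared distance.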

High-Level Pseudocode for the Algorithm

Enumeration takes $2^{n/3} \cdot \mathrm{poly}(n, m)$ time; triangle detection runs in $2^{\omega n/3 + o(n)}$ time, which dominates the overall running time.
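The split-and-list phase can be sketched as follows. This is an illustrative reconstruction (function name and block assignment are ours): it partitions the basis into three blocks, enumerates each block's partial subset sums, and then finds the best triple. For simplicity the triple search here is the naive cubic loop; the sub-$2^n$ bound comes from replacing it with matrix-multiplication-based minimum-weight triangle detection.

```python
import itertools

def cvp01_split_and_list(B, t):
    """Split-and-list sketch for (0,1)-CVP_2.
    Returns the minimum squared distance ||Bz - t||^2 over z in {0,1}^n."""
    n, m = len(B), len(t)
    blocks = [list(range(j, n, 3)) for j in range(3)]  # simple 3-way split

    # For each block, enumerate all 2^(|block|) partial subset sums.
    lists = []
    for blk in blocks:
        part = []
        for bits in itertools.product((0, 1), repeat=len(blk)):
            vec = [sum(b * B[i][c] for b, i in zip(bits, blk)) for c in range(m)]
            part.append(vec)
        lists.append(part)

    # Naive "triangle" search over the three parts (placeholder for the
    # fast matrix-multiplication-based minimum-weight triangle algorithm).
    best = None
    for v1 in lists[0]:
        for v2 in lists[1]:
            for v3 in lists[2]:
                d2 = sum((v1[c] + v2[c] + v3[c] - t[c]) ** 2 for c in range(m))
                best = d2 if best is None else min(best, d2)
    return best

B = [[3, 0], [0, 4], [1, 1]]
print(cvp01_split_and_list(B, [4, 5]))  # → 0 (target is a subset sum)
print(cvp01_split_and_list(B, [1, 0]))  # → 1 (closest subset sum is (0,0) or (1,1))
```

The three enumerated lists have $2^{n/3}$ entries each; only the final search step needs to be accelerated to obtain the $2^{\omega n/3}$ bound.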

3. Reductions to MAX-SAT and Minimum-Weight $k$-Clique

Equivalence to Weighted Max-$p$-SAT

For even $p$, there is a polynomial-time Karp reduction from $(0,1)$-$\mathrm{CVP}_p$ to Weighted Max-$p$-SAT on $n$ variables. Each variable encodes the inclusion of a particular basis vector. By constructing weighted $p$-SAT clauses whose satisfaction records quadratic or higher-order combinations of the coefficients, the optimum truth assignment coincides (up to a shift) with the minimum lattice distance.

This reduction is tight for all even $p$, showing a fine-grained equivalence (up to polynomial factors) between these problems.

Reduction to Minimum-Weight $k$-Clique

The quadratic structure of the Euclidean norm allows a reduction of $(0,1)$-$\mathrm{CVP}_2$ to minimum-weight $k$-clique detection. Assign each block's subset sums to a vertex part, with the edge weights as described above. The optimal subset sum then corresponds exactly to the minimum total clique weight. For $k = 3$, the fastest algorithm for minimum-weight triangle detection translates directly into the best $(0,1)$-$\mathrm{CVP}_2$ algorithm to date.

4. Complexity Analysis and Fine-Grained Implications

Breaking the $2^n$ barrier is achieved by applying triangle detection with fast matrix multiplication, giving runtime $2^{\omega n/3 + o(n)} \approx (1.7299)^n$, where $\omega < 2.373$ is the matrix multiplication exponent (Abboud et al., 7 Jan 2025). This is an exponentially large speedup over brute-force enumeration, obtained by shrinking the constant in the exponent.
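The stated base follows from simple exponent arithmetic: $2^{\omega n/3} = (2^{\omega/3})^n$. A quick check (the exact constant depends on which bound for $\omega$ is used; we assume $\omega < 2.373$ here):

```python
# 2^(omega*n/3) = (2^(omega/3))^n, so the per-step base is 2^(omega/3).
omega = 2.373  # assumed upper bound on the matrix multiplication exponent
base = 2 ** (omega / 3)
print(base)  # slightly above 1.73; tighter bounds on omega give ~1.7299
```

With $\omega = 3$ (schoolbook matrix multiplication) the base would be exactly $2$, i.e., no improvement over brute force, so the speedup comes entirely from $\omega < 3$.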

Implications for Complexity Conjectures

  • SETH Barriers: Known SETH-based lower bounds for CVP in $\ell_p$ norms where $p$ is not an even integer preclude $2^{(1-\varepsilon)n}$-time algorithms. However, no such barriers were established for even $p$ (in particular, the Euclidean norm $\ell_2$), and this result demonstrates explicit progress, showing that the Euclidean case is algorithmically easier in this restricted setting.
  • Minimum-Weight $k$-Clique and APSP: It is widely conjectured that minimum-weight $k$-clique requires $n^{k - o(1)}$ time. Any breakthrough in minimum-weight $k$-clique detection would imply faster $(0,1)$-$\mathrm{CVP}_2$ algorithms, so the presumed hardness of the lattice problem lends cryptographic support to the $k$-clique and, by implication, APSP hardness conjectures.

5. Broader Connections and Cryptographic Consequences

A fine-grained equivalence exists between $(0,1)$-$\mathrm{CVP}_p$ (for even $p$) and Weighted Max-$p$-SAT, meaning improvements to one translate precisely to the other (up to polynomial factors). This establishes direct links between lattice-based cryptographic hardness assumptions (in the worst case) and central fine-grained complexity conjectures: notably SETH (the Strong Exponential Time Hypothesis), MAX-SAT, minimum-weight clique, and APSP.

Additionally, prior fine-grained hardness reductions for general $\mathrm{CVP}_p$ all make essential use of the $(0,1)$ case. Consequently, the new algorithm shows that, in the Euclidean setting, such restricted reductions cannot rule out sub-$2^n$ time; any unconditional lower bound must leverage the structure of general coefficients outside $\{0,1\}$.

6. Summary and Impact

The discovery of a sub-$2^n$ exact algorithm for $(0,1)$-$\mathrm{CVP}_2$ establishes a new frontier for both the algorithmic understanding of lattice problems and their connections to classical hypotheses in fine-grained complexity. The equivalences and reductions presented inform ongoing debates regarding the fine-grained hardness of cornerstone combinatorial and cryptographic problems, illustrating how algorithmic advances in specialized lattice settings feed directly into broader complexity-theoretic landscapes (Abboud et al., 7 Jan 2025).
