
Quantum k-Nearest Neighbour (kNN)

Updated 17 March 2026
  • Quantum kNN is a quantum generalization of classical kNN that uses quantum parallelism and amplitude amplification for faster nearest-neighbor search.
  • The method employs various quantum data encoding strategies (amplitude, angle, binary) to efficiently compute distances and classify high-dimensional data.
  • The GB-QkNN variant combines classical granular-ball reduction with quantum HNSW graph search, achieving polylogarithmic query complexity and improved scalability.

Quantum k-Nearest Neighbour (kNN) algorithms are quantum generalizations of the classical kNN method for pattern classification, regression, and clustering. These protocols exploit quantum parallelism, amplitude amplification, and efficient quantum distance computation to accelerate nearest-neighbour search, providing resource and (in certain regimes) asymptotic complexity advantages over conventional classical approaches. A diverse landscape of quantum kNN architectures exists, varying by data encoding, similarity metric, search subroutines, and hybridization with classical pre- and post-processing. The following sections detail the canonical mechanisms, state-of-the-art developments, and comparative computational attributes of quantum kNN as evidenced in recent literature, with special focus on the Granular-Ball based Quantum kNN (GB-QkNN) algorithm (Xia et al., 29 May 2025).

1. Quantum Data Encoding, Distance Metrics, and State Preparation

Quantum kNN algorithms begin by embedding classical or quantum data into quantum registers. The encoding method critically impacts resource overhead, circuit complexity, and noise resilience:

  • Amplitude Encoding: Each classical vector $x \in \mathbb{R}^d$ is represented as a normalized quantum state $\lvert x\rangle = \|x\|^{-1}\sum_{j=0}^{d-1} x_j \lvert j\rangle$. This encoding, common in several proposals (Zardini et al., 2022, Dang et al., 2018, Basheer et al., 2020), supports distance estimation via swap tests or inner product evaluations.
  • Angle/Rotation Encoding: Encoding features via basis rotations, such as $R_z(\theta_i)$ for feature $x_i$, or composite entangling circuits with IsingXY/CNOT layers to amplify nonlinearities and enhance class separability (Ronggon et al., 9 May 2025).
  • Explicit/Binary Encoding: For binary or binarized data, bit-wise mapping to the computational basis is frequently used, especially in Hamming-distance protocols (Sharma, 2020, Li et al., 2021, Quezada et al., 2022).
  • Coherent-State and Photonic Encoding: In quantum-optical kNN, numeric features are rescaled to phase shifts and encoded into coherent states across multiple optical modes (Mehta et al., 2024).
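The two gate-based encodings above can be sketched as statevector computations in NumPy. This is a simulation-level illustration only (not a circuit implementation), and the function names are ours:

```python
import numpy as np

def amplitude_encode(x):
    """Amplitude encoding: map x in R^d to the amplitudes of a
    normalized state |x> = x / ||x|| (d padded to a power of two
    in a real register; omitted here)."""
    x = np.asarray(x, dtype=float)
    return x / np.linalg.norm(x)

def angle_encode(x):
    """Angle encoding: one qubit per feature, each prepared as
    R_z(x_i) H |0>.  Returns the per-qubit 2-dim statevectors of
    the (unentangled) product state."""
    states = []
    for theta in np.asarray(x, dtype=float):
        plus = np.array([1.0, 1.0]) / np.sqrt(2.0)              # H|0>
        rz = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])
        states.append(rz @ plus)
    return states

psi = amplitude_encode([3.0, 4.0])   # amplitudes (0.6, 0.8)
```

Amplitude encoding packs $d$ features into $\lceil\log_2 d\rceil$ qubits at the cost of expensive state preparation; angle encoding uses one qubit (and one rotation) per feature but keeps circuits shallow.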

Similarity measurements, or "distances", employ metrics fitted to the data embedding. The swap test is employed for fidelity/cosine metrics ($|\langle x|y\rangle|^2$) (Zardini et al., 2022, Ronggon et al., 9 May 2025, Basheer et al., 2020); Euclidean distances may be reconstructed via transformations of inner products (Zardini et al., 2023, Xia et al., 29 May 2025). In quantum-optical kNN the distance metric is $d(x, y) = \sum_k [1 - \cos(\theta^x_k - \theta^y_k)]$, evaluated by photonic interference (Mehta et al., 2024).
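A minimal NumPy sketch of how the swap-test statistic yields a Euclidean distance, assuming real, non-negative feature vectors so the sign of the inner product is recoverable (function names are ours):

```python
import numpy as np

def swap_test_p0(psi_x, psi_y):
    """Probability of measuring the swap-test ancilla in |0>:
    p0 = (1 + |<psi_x|psi_y>|^2) / 2."""
    overlap_sq = abs(np.vdot(psi_x, psi_y)) ** 2
    return 0.5 * (1.0 + overlap_sq)

def euclidean_sq_from_overlap(x, y):
    """Recover ||x - y||^2 = ||x||^2 + ||y||^2 - 2<x, y>, using
    <x, y> = ||x|| ||y|| <psi_x|psi_y> for normalized states and
    |<psi_x|psi_y>|^2 = 2*p0 - 1 from the swap-test statistic."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    psi_x, psi_y = x / nx, y / ny
    fidelity = 2.0 * swap_test_p0(psi_x, psi_y) - 1.0
    inner = nx * ny * np.sqrt(fidelity)      # sign assumed non-negative
    return nx**2 + ny**2 - 2.0 * inner

x = np.array([1.0, 2.0])
y = np.array([2.0, 1.0])
dist_sq = euclidean_sq_from_overlap(x, y)    # exact value is 2.0
```

On hardware, $p_0$ is estimated from repeated shots rather than computed exactly, which is the source of the shot-noise overhead discussed later.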

2. Quantum Speedup: Core Search and Approximate Algorithms

Quantum acceleration in kNN arises from two main ingredients: massive parallelism in distance computation and quantum-enhanced minimum/maximum search.

  • Grover-Based Search: Many kNN protocols invoke Grover's search or its generalizations (e.g., Dürr–Høyer k-minima/maxima) to extract indices of the $k$ closest points with $O(\sqrt{kM})$ query complexity (Dang et al., 2018, Basheer et al., 2020, Sharma, 2020, Li et al., 2021). This enables quadratic speedup over classical $O(kM)$ selection in large datasets.
  • Quantum Sorting: Alternative approaches employ quantum sorting algorithms, trading off between memory and circuit depth (parameters $m, p$), achieving sublinear scaling in $N$ provided data can be loaded efficiently into quantum memory (Quezada et al., 2022).
  • Polylogarithmic Search via Data Reduction: GB-QkNN (Xia et al., 29 May 2025) introduces a hybrid scheme: classical granular-ball reduction compresses the dataset (size $N$) to a small summary of balls ($M \ll N$), over which a quantum-accelerated Hierarchical Navigable Small World (HNSW) graph is built. Quantum subroutines are then used for neighbor search, yielding $O(\log^2 M)$ query time.

Comparison Table: Quantum kNN Complexity

Algorithm        | Build Complexity       | Query Complexity
Brute-force kNN  | $O(N\log N)$           | $O(Nd)$
HNSW (classical) | $O(N\log N)$           | $O(\log N)$
FQkNN (Grover)   | –                      | $O(\sqrt{kN})$
GB-QkNN          | $O(cd\,N + M\log M)$   | $O(\log^2 M)$

As $M \ll N$ in GB-QkNN, this framework attains lower asymptotic query cost than both classical and quantum kNNs operating on full datasets.
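Plugging illustrative sizes into these asymptotic query costs makes the gap concrete. The numbers below are our own assumptions (constants and per-query units are ignored, so only relative magnitudes are meaningful):

```python
import math

# Assumed sizes: N = 10^6 points in d = 16 dimensions, reduced to
# M = 10^3 granular balls, with k = 5 neighbours requested.
N, M, k, d = 10**6, 10**3, 5, 16

costs = {
    "brute-force O(Nd)":  N * d,
    "Grover O(sqrt(kN))": math.sqrt(k * N),
    "GB-QkNN O(log^2 M)": math.log2(M) ** 2,
}
for name, c in sorted(costs.items(), key=lambda kv: kv[1]):
    print(f"{name:22s} ~ {c:,.1f}")
```

At these sizes the Grover-based cost is roughly $\sqrt{5\times 10^6} \approx 2{,}200$ while $\log_2^2 10^3 \approx 100$, illustrating why compressing to $M \ll N$ balls before searching pays off.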

3. Granular-Ball Quantum kNN (GB-QkNN): Architecture and Mechanisms

The GB-QkNN architecture (Xia et al., 29 May 2025) realizes an efficient, quantum-accelerated approximate kNN pipeline with the following stages:

  1. Granular-Ball Data Reduction (Classical):
    • Partition the data $D = \{x_1, ..., x_N\} \subset \mathbb{R}^d$ into granular balls $\{\mathcal{G}_i\}$ based on class purity (threshold $T$).
    • Each ball $\mathcal{G}_i$ is defined by center $C = (1/n) \sum_{j\in I} x_j$ and radius $R = (1/n) \sum_{j\in I} \|x_j - C\|$, and splits recursively if purity $< T$.
    • The procedure outputs $M\,(\ll N)$ centers; time complexity $O(c\,d\,N)$.
  2. Quantum HNSW Construction:
    • Build a small-world graph over the granular-ball centers $\{C_i\}$ in layered fashion; for each new node, assign an initial layer $L$, then greedily link down via quantum neighbor selection.
    • Each neighbor search employs quantum distance and comparator circuits, leveraging parallelism to select the $m$ closest among candidates.
  3. Quantum Distance Evaluation:
    • Leverages angle encoding and swap tests to recover $\|x - y\|^2 = \|x\|^2 + \|y\|^2 - 2\langle x, y \rangle$, where the squared overlap is computed as $|\langle \psi_x|\psi_y \rangle|^2 = 2p_0 - 1$ (from the swap-test outcome probability $p_0$).
    • Key resource costs: QRAM access $O(\log M)$, controlled-$R_y$ (angle) encoding $O(d)$ gates (depth $O(1)$ in parallel), swap test $O(d)$ Toffolis (depth $O(1)$ in QPE).
  4. Quantum Comparator Circuit:
    • A comparator on $q$-bit registers outputs a flag marking the smaller distance using $O(q)$ CNOT gates.
  5. Query/Search Algorithm:
    • For a test point, descent down the HNSW levels requires $O(\log M)$ layers, each layer employing quantum parallel minimum-finding; net query complexity $O(\log^2 M)$.
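Stage 1, the classical granular-ball reduction, can be sketched directly. The split rule below (partition by proximity to the means of the two largest classes) is an illustrative assumption; the paper's exact splitting heuristic may differ:

```python
import numpy as np

def purity(labels):
    """Fraction of points carrying the majority class label."""
    _, counts = np.unique(labels, return_counts=True)
    return counts.max() / counts.sum()

def make_ball(X, y):
    """Summarize a point set as (center, mean radius, majority label)."""
    c = X.mean(axis=0)
    r = np.linalg.norm(X - c, axis=1).mean()
    return c, r, np.bincount(y).argmax()

def granular_balls(X, y, T=0.95):
    """Recursively split (X, y) until every ball's purity reaches T."""
    if len(X) <= 1 or purity(y) >= T:
        return [make_ball(X, y)]
    classes, counts = np.unique(y, return_counts=True)
    top2 = classes[np.argsort(counts)[-2:]]
    seeds = np.stack([X[y == cls].mean(axis=0) for cls in top2])
    side = np.linalg.norm(X[:, None] - seeds[None], axis=2).argmin(axis=1)
    if side.min() == side.max():        # degenerate split: stop recursing
        return [make_ball(X, y)]
    return (granular_balls(X[side == 0], y[side == 0], T)
            + granular_balls(X[side == 1], y[side == 1], T))

# Two well-separated classes collapse to M = 2 pure balls.
X_demo = np.vstack([np.zeros((5, 2)), 10.0 * np.ones((5, 2))])
y_demo = np.array([0] * 5 + [1] * 5)
balls = granular_balls(X_demo, y_demo)
```

The $M$ resulting (center, radius, label) triples are what the quantum HNSW stage indexes in place of the original $N$ points.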

4. Resource and Complexity Analysis

Resource demands and asymptotic scaling for GB-QkNN (Xia et al., 29 May 2025):

  • Build/Preprocessing: $O(c\,d\,N + M\log M)$ (granular-ball formation plus quantum HNSW build; $m \ll M$).
  • Query Cost: $O((m + \log M)\log M) \sim O(\log^2 M)$ per query (polylogarithmic in $M$).
  • Qubit Count: $s = \lceil\log_2 M\rceil$ (address), $d \times t_a$ (feature encoding), $m + q$ (neighbors, comparators); $Q_{\rm total} = O(s + d\,t_a + m + q)$.
  • Circuit Depth: Each layer $O(\max\{\log M, t_a, q\})$; total query $O(\log M \cdot \max\{\log M, t_a, q\})$.
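The qubit-count accounting above can be turned into a back-of-envelope calculator. The instance sizes below are hypothetical and only meant to show the bookkeeping:

```python
import math

def gbqknn_qubit_estimate(M, d, t_a, m, q):
    """Register accounting: address s = ceil(log2 M), feature register
    d * t_a (t_a precision qubits per angle-encoded feature), plus
    m neighbour-slot and q comparator qubits.  Back-of-envelope only;
    ancillas for QRAM and swap tests are ignored."""
    regs = {
        "address":    math.ceil(math.log2(M)),
        "features":   d * t_a,
        "neighbours": m,
        "comparator": q,
    }
    regs["total"] = sum(regs.values())
    return regs

# Hypothetical instance: M = 1024 balls, d = 8 features, t_a = 4
# precision qubits per angle, m = 5 neighbours, q = 6 comparator bits.
est = gbqknn_qubit_estimate(M=1024, d=8, t_a=4, m=5, q=6)
```

Here the feature register ($d\,t_a = 32$ qubits) dominates the address register ($s = 10$), which is typical when $d$ is moderate and $M$ is small after reduction.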

Prerequisite architectural assumptions include fast, fault-tolerant QRAM (an open problem at large scale), reliable controlled-rotation and comparator gates, and error control over $O(\log M)$ sequential quantum operations.

A direct comparison with quantum Grover-based and classical nearest-neighbor methods shows GB-QkNN has superior asymptotic query time when $M \ll N$.

5. Algorithmic Diversity: Quantum kNN Variants and Practical Performance

Beyond GB-QkNN, multiple quantum kNN instantiations exist, differentiated by choice of encoding, metric, and search strategy:

  • Swap Test and Amplitude Encoding: As in (Zardini et al., 2022, Dang et al., 2018, Ronggon et al., 9 May 2025), the swap test estimates fidelities, and amplitude encoding supports parallel distance computation. Shot noise requires $O(\varepsilon^{-2})$ measurements for estimation error $\varepsilon$.
  • Quantum Euclidean Distance Estimation: Low-qubit protocols based on amplitude encoding and shallow "Bell-H" tests avoid SWAP circuits, estimating $\|\mathbf{u}_j - \mathbf{u}'\|^2$ directly (Zardini et al., 2023).
  • Quantum Optical kNN: Phase-encoded multimode coherent states and photonic interference enable simultaneous $N$-feature distance extraction per run, limited by the number of photonic elements and optical loss (Mehta et al., 2024).
  • Quantum Sorting: Circuit designs combining $m$ parallel copies and $p$ rounds of Grover-style amplitude amplification can reduce query scaling to $O(knmp)$, with $m, p$ traded off between qubit/memory and circuit-depth constraints (Quezada et al., 2022).
  • Error Mitigation: QkNN protocols with repetition encoding (e.g., a 3-qubit repetition code) sustain classification fidelity under depolarizing noise up to $p \lesssim 0.2$–$0.3$ (Ronggon et al., 9 May 2025).
  • Practical Application: Empirical studies on UCI and image classification benchmarks (Zardini et al., 2022, Dang et al., 2018, Ronggon et al., 9 May 2025) confirm that quantum kNN can match classical accuracy in ideal (noise-free, large shot-count) settings, with quantum variants outperforming or matching advanced classical methods (e.g., random forest, SVM) in some regression/classification metrics, though current hardware noise and measurement constraints remain a bottleneck.
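The $O(\varepsilon^{-2})$ shot-noise scaling noted above can be seen in a toy simulation of swap-test sampling. This models only the measurement statistics (a Bernoulli ancilla readout), not an actual circuit:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_fidelity(true_fid, shots):
    """Simulate swap-test shot noise: the ancilla reads |0> with
    probability p0 = (1 + F) / 2, so the estimator is
    F_hat = 2 * p0_hat - 1 from the observed hit fraction."""
    p0 = 0.5 * (1.0 + true_fid)
    hits = rng.binomial(shots, p0)
    return 2.0 * (hits / shots) - 1.0

# Standard error of F_hat shrinks like 1/sqrt(shots), so accuracy
# epsilon requires on the order of 1/epsilon^2 shots.
for shots in (100, 10_000, 1_000_000):
    err = abs(estimate_fidelity(0.8, shots) - 0.8)
    print(f"{shots:>9d} shots -> |error| = {err:.4f}")
```

Each 100-fold increase in shots buys roughly one extra decimal digit of accuracy, which is why readout cost, not circuit depth, often dominates near-term quantum kNN runtimes.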

6. Limitations, Open Problems, and Research Directions

Principal limitations and open technical challenges:

  • QRAM Scalability: Efficient, large-scale QRAM is assumed in many proposals (Xia et al., 29 May 2025, Basheer et al., 2020, Li et al., 2021) but remains technologically unresolved. Without QRAM, state preparation overhead may erase asymptotic speedups.
  • Error Propagation and NISQ Feasibility: Quantum circuits for swap test, amplitude amplification, and comparator oracles accumulate gate and measurement errors; error mitigation, repetition encoding, or shallow circuit design are active research topics (Ronggon et al., 9 May 2025).
  • High-Dimensional Encoding: Angle and amplitude encodings scale poorly with $d$; when $d$ is large, encoding can require exponentially many parameters/rotations (Xia et al., 29 May 2025, Ronggon et al., 9 May 2025).
  • Data Loading and Readout: Measurement overhead ($O(1/\varepsilon^2)$ shots for estimation error $\varepsilon$) and the classical post-processing required for k-nearest-neighbor extraction limit near-term advantage (Ronggon et al., 9 May 2025, Zardini et al., 2022).

Future work includes robust error mitigation, hybrid quantum-classical voting strategies, improved distance metrics (e.g., Mahalanobis or kernel-based quantum metrics), scalable embedding/block-encoding, and resource-aware variants tailored for NISQ architectures.

7. Summary of Advances and Outlook

Quantum kNN represents a rich family of methods extending classical nearest-neighbour techniques to quantum data, leveraging quantum superposition for massive parallelism in distance computation and quantum search/sort for sublinear or even polylogarithmic neighbor extraction. Recent advances, such as GB-QkNN (Xia et al., 29 May 2025), illustrate the power of merging classical preprocessing (granular-ball reduction) with quantum-accelerated graph-based search (quantum HNSW), reducing query complexity to $O(\log^2 M)$, a clear improvement over both classical and prior quantum approaches. Ongoing challenges include practical quantum memory (QRAM), error management, and efficient embedding in high-dimensional feature spaces. Further progress in hybrid quantum architectures and error-resilient circuit design will dictate the applicability of quantum kNN to large-scale, real-world tasks.
