Gaussian Boson Sampling: Theory & Applications

Updated 5 August 2025
  • Gaussian Boson Sampling is a photonic quantum computation scheme that leverages squeezed vacuum states and a passive linear interferometer to sample photon-number distributions with probabilities determined by the matrix hafnian.
  • It enhances experimental efficiency by directly using squeezed states, which increases multiphoton event probabilities and reduces resource overhead compared to traditional Fock-based schemes.
  • The computational complexity of GBS is rooted in the #P-hard nature of hafnian evaluations, suggesting classical simulation is intractable and making it a strong candidate for demonstrating quantum advantage.

Gaussian Boson Sampling (GBS) is a photonic quantum computational scheme that samples from the photon-number distribution of multimode Gaussian states—specifically, those comprising squeezed vacuum states injected into a passive linear interferometer. In contrast to traditional Boson Sampling, which relies on single-photon Fock states and whose output probabilities are governed by the matrix permanent, GBS employs the full nonclassical structure of squeezed states, leading output probabilities to depend on the matrix hafnian. The computational task of sampling from the resulting distribution is conjectured to be classically intractable due to #P-hardness properties, and GBS is distinguished by its experimental efficiency and potential for near-term photonic quantum advantage (Hamilton et al., 2016).

1. Theoretical Framework and Definition

GBS is mathematically defined by the probability of observing a photon-number pattern $\vec{n} = (n_1, \dots, n_M)$ at the interferometer outputs:

$$\mathrm{Pr}(\vec{n}) = \frac{1}{\sqrt{|\sigma|}} \frac{|\mathrm{Haf}(A_S)|^2}{n_1!\, n_2! \cdots n_M!}$$

where $\sigma$ is the covariance matrix of the overall Gaussian state, $A_S$ is a submatrix constructed from the interferometer transformation and the squeezing parameters, and the hafnian encodes weighted sums over perfect matchings. In the case of single-mode squeezed inputs and collision-free detection events (at most one photon per mode), $A_S$ is a symmetric matrix whose entries are determined by the mode transformations and input squeezing.

The sampling problem is: given a specification of the Gaussian input (squeezing, displacement), linear optics network (passive unitary or sub-unitary), and ideal photon-number-resolving detectors, generate samples from this probability distribution.
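
As a concrete illustration, the following minimal sketch constructs this matrix for the special case of pure single-mode squeezed vacua injected into a lossless unitary $U$, where the relevant matrix reduces to $B = U\,\mathrm{diag}(\tanh r_j)\,U^T$ and $A_S$ is the submatrix of $B$ selected by the detected pattern. The function names are illustrative and the normalization is omitted:

```python
import numpy as np

def gbs_matrix(U, r):
    """B = U diag(tanh r_j) U^T for pure single-mode squeezed vacua
    with squeezing parameters r_j fed into the linear-optics unitary U."""
    return U @ np.diag(np.tanh(r)) @ U.T

def sample_submatrix(B, pattern):
    """B_S for a photon pattern n: keep row/column i repeated n_i times."""
    idx = [i for i, n in enumerate(pattern) for _ in range(n)]
    return B[np.ix_(idx, idx)]

# Example: one squeezed mode into a 50:50 beam splitter.
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
B = gbs_matrix(U, np.array([0.5, 0.0]))
B_S = sample_submatrix(B, pattern=[1, 1])
# Pr(1, 1) is proportional to |Haf(B_S)|^2 = |B_S[0, 1]|^2 (see Section 3).
print(abs(B_S[0, 1]) ** 2)
```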

2. Squeezed States as Quantum Resource

Single-mode squeezed vacuum states are nonclassical resources defined by reduced quadrature noise below the vacuum limit. In the GBS protocol, $K \approx N$ squeezed input states (with mean photon number $\langle n \rangle \lesssim 1$) replace probabilistically heralded single-photon sources. This direct usage enables the retention and exploitation of higher-order Fock components, sharply increasing the overall multiphoton event generation probability. The nonclassical character of squeezing ensures that the output statistics are sensitive to quantum interference effects not present in classical light, and is fundamental to the computational complexity of the GBS output distribution (Hamilton et al., 2016).
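
The photon-number distribution of a single-mode squeezed vacuum makes the higher-order Fock components explicit: only even photon numbers occur, and the mean photon number is $\langle n \rangle = \sinh^2 r$. A minimal numerical check (the function name is illustrative):

```python
import numpy as np
from math import factorial

def squeezed_vacuum_pn(r, n):
    """P(n) for single-mode squeezed vacuum with squeezing r.
    Odd photon numbers have zero probability; photons come in pairs."""
    if n % 2:
        return 0.0
    m = n // 2
    return np.tanh(r) ** n * factorial(n) / ((2 ** m * factorial(m)) ** 2 * np.cosh(r))

r = 0.5  # illustrative squeezing; <n> = sinh^2(0.5) ~ 0.27
probs = [squeezed_vacuum_pn(r, n) for n in range(10)]
print(sum(n * p for n, p in enumerate(probs)))  # ~ sinh(r)^2
print(probs[2], probs[4])  # nonzero two- and four-photon components
```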

3. Hafnian Function and Complexity

The theoretical advance underlying GBS is the identification of the hafnian as the matrix function determining output probabilities. For an $n$-mode Gaussian state,

$$\mathrm{Pr}(\vec{n}) \propto |\mathrm{Haf}(A_S)|^2$$

where the hafnian, for a $2n \times 2n$ symmetric matrix $A$, is given by

$$\mathrm{Haf}(A) = \sum_{M \in \mathrm{PM}(2n)} \prod_{(i,j) \in M} A_{i,j}$$

with the sum running over all perfect matchings $M$ of $2n$ elements. The hafnian generalizes the permanent: for a block-structured matrix one recovers

$$\mathrm{Haf}\!\left(\begin{smallmatrix} 0 & G \\ G^T & 0 \end{smallmatrix}\right) = \mathrm{Perm}(G).$$

Evaluating the hafnian is known to be #P-hard, and the paper shows that approximating GBS output probabilities to multiplicative error is at least as hard as for permanents.
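
The definition translates directly into a reference implementation that enumerates perfect matchings; the sketch below also verifies the block-matrix identity against a naive permanent. This is for verification only, not an efficient algorithm:

```python
import numpy as np
from itertools import permutations

def hafnian(A):
    """Hafnian of a symmetric matrix by perfect-matching recursion:
    pair index 0 with each remaining index and recurse on the rest."""
    m = A.shape[0]
    if m == 0:
        return 1.0
    if m % 2:
        return 0.0
    total = 0.0
    for k in range(1, m):
        rest = [i for i in range(1, m) if i != k]
        total += A[0, k] * hafnian(A[np.ix_(rest, rest)])
    return total

def permanent(G):
    """Naive permanent via the permutation-sum definition."""
    n = G.shape[0]
    return sum(np.prod([G[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

# Check Haf([[0, G], [G^T, 0]]) == Perm(G) on a random 3x3 matrix.
rng = np.random.default_rng(0)
G = rng.standard_normal((3, 3))
block = np.block([[np.zeros((3, 3)), G], [G.T, np.zeros((3, 3))]])
print(np.isclose(hafnian(block), permanent(G)))  # True
```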

In system sizes relevant for near-term experiments (e.g., $M \sim 100$, $N \sim 30$), classical sampling from GBS distributions is expected to be infeasible, subject to complexity conjectures aligning with those in standard Boson Sampling.
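
To gauge the scale: direct enumeration sums over $(2n-1)!!$ perfect matchings, which is already astronomical at $N = 30$ detected photons (the best known exact hafnian algorithms still scale exponentially):

```python
from math import prod

def double_factorial(m):
    # (2n-1)!! = number of perfect matchings of 2n elements
    return prod(range(m, 0, -2))

print(double_factorial(59))  # matchings of 60 elements (N = 30 photons), ~ 2.9e40
```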

4. Experimental Protocol and Implementation

A GBS experiment comprises:

  • Input: $K \sim N$ single-mode squeezed vacuum states, typically generated using parametric down-conversion or waveguided nonlinear crystals.
  • Interferometer: An $M$-mode linear optical network described by a random (e.g., Haar-distributed) unitary matrix $T$ with $M \gtrsim N^2$, designed to distribute input photons and suppress multi-photon occupancy per mode (a Haar-random unitary can be drawn numerically as in the sketch after this list).
  • Detection: Photon-number-resolving detectors (e.g., superconducting nanowire devices) measure the output in the Fock basis. The architecture is arranged so that most output patterns are collision-free.
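
For numerical modeling, a standard way to draw a Haar-random unitary is QR decomposition of a complex Ginibre matrix with a phase correction (Mezzadri's construction); a minimal sketch:

```python
import numpy as np

def haar_unitary(M, rng=None):
    """Haar-random M x M unitary via QR of a complex Ginibre matrix,
    absorbing the phases of R's diagonal to make the measure uniform."""
    if rng is None:
        rng = np.random.default_rng()
    Z = (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diag(R)
    return Q * (d / np.abs(d))  # multiply column j by the phase of R[j, j]

T = haar_unitary(8)
print(np.allclose(T.conj().T @ T, np.eye(8)))  # True: T is unitary
```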

Resource and scaling requirements are as follows:

| Parameter | GBS Requirement | Postselected Fock BS | Scattershot BS |
|---|---|---|---|
| Number of inputs | $K \sim N$ squeezed states | $N$ single photons | $N^2$ two-mode squeezed states |
| Number of modes | $M \sim N^2$ | $M \sim N^2$ | $M \sim N^2$ |
| Sampling space | Reduced (unconditioned on input locations) | Binomial over outputs | Binomial over $N^2$ inputs |

Experimental challenges include achieving low-loss, high-purity squeezed states, maintaining high interferometer stability and rank ($\mathrm{rank}\,B \sim N$), tuning squeezing parameters to optimal gain, handling dark counts, and scaling detection arrays.

5. Advantages over Fock Boson Sampling

GBS improves upon previous photonic sampling schemes through several mechanisms:

  • Generation Probability: Using squeezed states directly, rather than as heralded single photons, yields exponentially higher probabilities for relevant multiphoton events, reflecting the negative-binomial photon statistics of GBS inputs versus the binomial statistics of Fock inputs (compare the two distributions in the sketch after this list).
  • Sampling Space Reduction: GBS does not require conditioning on specific photon input locations, avoiding the exponential scaling of the space of valid input-output configurations seen in Scattershot Boson Sampling.
  • Resource Efficiency: Fewer squeezed states and modes are required to reach comparable photon numbers and event rates, mainly because all multiphoton configurations contribute valid samples, rather than only the rare "one-photon-per-mode" events isolated in conventional Boson Sampling.
  • Measurement Time: The sample space reduction directly lessens the measurement time required to obtain meaningful statistics.
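
To make the statistics concrete: the total pair number from $K$ identical squeezed vacua follows a negative binomial law $\mathrm{NB}(K/2, \operatorname{sech}^2 r)$, while $N$ heralded sources firing independently give a binomial count. A small comparison (the parameter values, including the heralding probability $p$, are illustrative):

```python
import numpy as np
from scipy.stats import nbinom, binom

K, r = 10, 0.5
# Total photon pairs from K single-mode squeezed vacua: NB(K/2, sech^2 r).
pairs = nbinom(K / 2, 1 / np.cosh(r) ** 2)

# N heralded single-photon sources, each firing with probability p.
N, p = 10, 0.5
clicks = binom(N, p)

print(pairs.pmf(5))   # probability of 5 pairs (10 photons) from squeezed inputs
print(clicks.pmf(5))  # probability of exactly 5 heralded photons
```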

6. Computational Hardness and Complexity-Theoretic Status

The output probabilities in GBS are #P-hard to compute because they are functions of the hafnian. By analogy with Boson Sampling's complexity-theoretic argument (where efficient simulation would collapse the polynomial hierarchy), GBS inherits this hardness under the assumption that approximating the hafnian on typical Gaussian matrices is classically infeasible.

The paper articulates two key conjectures for approximate (as opposed to exact) GBS to inherit #P-hardness:

  • Hafnian-of-Gaussians Conjecture: Approximating the hafnian of a Gaussian matrix is #P-hard even up to multiplicative error.
  • Anti-concentration Conjecture: Hafnians of typical Gaussian matrices are not concentrated near zero, which is necessary for Stockmeyer-type reductions to go through (stated schematically below).
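
By analogy with the permanent anti-concentration conjecture of Aaronson and Arkhipov, one schematic way to state the second conjecture is the following (the ensemble $\mathcal{G}$ of symmetric complex Gaussian matrices and the polynomial bounds are assumptions of this sketch, not the paper's exact formulation):

$$\Pr_{X \sim \mathcal{G}}\left[\, |\mathrm{Haf}(X)|^2 \geq \varepsilon\, \mathbb{E}\!\left[|\mathrm{Haf}(X)|^2\right] \right] \geq \delta, \qquad \varepsilon, \delta \geq \frac{1}{\mathrm{poly}(n)}.$$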

These conjectures are critical to ruling out potential weaknesses in the hardness argument for near-term devices and remain an open area for rigorous proof.

7. Future Directions and Open Problems

Open research questions and prospects outlined include:

  1. Approximate sampling complexity: Establishing the hardness of GBS sampling to within multiplicative/additive error in the output probabilities, supported by rigorous proofs akin to the original Boson Sampling hardness arguments.
  2. Loss and noise tolerance: Determining the threshold loss and noise levels that preserve quantum computational advantage, beyond which the output statistics admit efficient classical descriptions (potentially entering the "classical simulable" regime).
  3. Protocol Extensions: GBS encompasses a family of protocols, unifying scattershot, photon-added/subtracted, and displacement-based schemes, and holds promise for further generalizations (e.g., reverse GBS, with Fock inputs and Gaussian measurements).
  4. Applications: GBS's structure suggests direct applications in molecular vibronic spectroscopy and graph-theoretic problems (e.g., dense subgraph identification), owing to the mapping between detection statistics and combinatorial features (hafnian counts) of encoded matrices; the sketch after this list illustrates the graph connection.
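
The graph-theoretic link is that the hafnian of a 0/1 adjacency submatrix counts perfect matchings of the induced subgraph, so GBS click patterns sample subgraphs weighted by their matching counts. A minimal sketch, reusing the recursive hafnian from Section 3:

```python
import numpy as np

def hafnian(A):
    """Hafnian by perfect-matching recursion (see Section 3)."""
    m = A.shape[0]
    if m == 0:
        return 1.0
    if m % 2:
        return 0.0
    total = 0.0
    for k in range(1, m):
        rest = [i for i in range(1, m) if i != k]
        total += A[0, k] * hafnian(A[np.ix_(rest, rest)])
    return total

# Adjacency matrix of the 4-cycle 0-1-2-3-0: its two perfect matchings
# are {01, 23} and {03, 12}, so the hafnian equals 2.
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])
print(hafnian(C4))  # 2.0
```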

A plausible implication is that the rigorous elucidation of the hafnian's average-case complexity and anti-concentration properties will critically determine the future of GBS as a near-term candidate for demonstrating quantum advantage.


In summary, Gaussian Boson Sampling is defined by sampling photon-number patterns at the output of a passive linear interferometer fed by squeezed vacuum states, where output probabilities are determined by the matrix hafnian. This method offers exponential improvements in event generation and resource efficiency and encodes computational problems that are #P-hard, standing as a leading approach for demonstrating quantum advantage with photonic devices (Hamilton et al., 2016). Future work is directed toward solidifying its complexity-theoretic foundation under realistic noise and loss, extending its protocol family, and exploiting new applications in high-dimensional quantum simulation and combinatorial optimization.

References

  1. Hamilton, C. S., Kruse, R., Sansoni, L., Barkhofen, S., Silberhorn, C., & Jex, I. (2016). Gaussian Boson Sampling. arXiv:1612.01199; Phys. Rev. Lett. 119, 170501 (2017).