
Stochastic Quantization Framework

Updated 10 September 2025
  • The stochastic quantization framework is a method that connects quantum field theory with stochastic processes by evolving fields along a discretized, fictitious time dimension.
  • The introduction of a weighted noise average corrects discretization artifacts, ensuring that noise-averaged correlation functions match QFT predictions even at finite step sizes.
  • Validated through perturbative analysis and zero-dimensional numerical simulations, the approach offers a promising path for efficient, nonperturbative QFT simulations.

Stochastic quantization is a formulation that connects quantum field theory (QFT) and stochastic processes by evolving fields along an extra, fictitious time direction governed by a Langevin equation. In the standard Parisi–Wu framework, this fictitious time is continuous and requires extrapolation to the continuum limit to guarantee correspondence with quantum correlation functions. The stochastic quantization framework discussed here introduces a discretized fictitious time and modifies the noise average by an explicit weight factor. This adjustment ensures that, in the large time limit, the noise-averaged correlation functions coincide exactly with those of the target QFT, even at finite, nonzero step size of the fictitious time discretization. The method is validated both perturbatively and numerically in a zero-dimensional toy model, avoiding the systematic errors associated with the usual need for a continuum limit.

1. Discrete Langevin Dynamics and Motivation for Weighted Noise Averages

The discretized stochastic quantization scheme defines a lattice in fictitious (Langevin) time with step size $\epsilon$, so that $t_n = n\epsilon$, $n = 0, 1, \ldots, N$, with stochastic fields $\phi_n(x)$ at each time slice. The discrete Langevin equation is

$$\frac{\phi_n(x) - \phi_{n-1}(x)}{\epsilon} = -W_n(x) + \eta_n(x)\,,$$

where $\eta_n(x)$ is Gaussian noise with covariance

$$\langle\eta_n(x)\,\eta_m(y)\rangle = \frac{2}{\epsilon}\,\delta_{nm}\,\delta^{(d)}(x-y)\,.$$

Here $W_n(x)$ is a discretized force term (e.g., for a scalar theory, approaching $-\square\phi + V'(\phi)$ as $\epsilon \to 0$). Discretization ambiguities permit adopting different conventions for the force $W_n(x)$ in the update equation and, separately, for $\overline{W}_n(x)$ in the path-integral formulation.
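
To make the update rule concrete, the sketch below implements one discrete Langevin trajectory for the zero-dimensional toy model used in Section 4 (a single variable, so the spatial arguments and the $\delta^{(d)}$ factor drop out). The explicit Euler-type drift $W_n = V'(\phi_{n-1})$ is an illustrative assumption, not the A-type or B-type choices of the paper.

```python
import numpy as np

def langevin_trajectory(m2, lam, eps, N, phi0=0.0, rng=None):
    """Discrete Langevin chain (phi_n - phi_{n-1})/eps = -W_n + eta_n
    for the 0-d model V(x) = m2*x^2/2 + lam*x^4/4.
    Assumes the simple explicit drift W_n = V'(phi_{n-1})."""
    rng = np.random.default_rng() if rng is None else rng
    traj = np.empty(N + 1)
    phi = phi0
    traj[0] = phi
    for n in range(1, N + 1):
        eta = rng.normal(scale=np.sqrt(2.0 / eps))   # <eta_n eta_m> = (2/eps) delta_nm
        W = m2 * phi + lam * phi**3                  # V'(phi_{n-1})
        phi = phi + eps * (-W + eta)
        traj[n] = phi
    return traj
```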

Performing the standard change of variables from noise $\eta_n$ to fields $\phi_n$ via the Nicolai map introduces a Jacobian determinant $M$ depending on the chosen $W_n$. At finite $\epsilon$, the statistical weight is not preserved under this change, leading to systematic discrepancies (on the lattice) between the long-fictitious-time average and the target QFT correlation functions.

To resolve this, the scheme introduces a modified noise average

$$\langle\mathcal{O}[\phi]\rangle_{\eta,w} = \frac{1}{Z_{\eta,w}} \int [d\eta]\, \mathcal{O}[\phi_\eta]\, w[\phi_\eta]\, \exp\left\{-\frac{\epsilon}{4}\sum_{n=1}^N \int d^dx\,\eta_n(x)^2\right\},$$

where the weight $w[\phi]$ is computed in terms of the two discretizations $W_n$ and $\overline{W}_n$ and their Jacobians, ensuring the correct continuum limit and, crucially, exactness at any finite fictitious time step in the large-time limit.
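
Operationally, sampling each $\eta_n$ with variance $2/\epsilon$ already realizes the Gaussian factor in the measure, so the weighted average reduces to a ratio of ordinary Monte Carlo averages over independent noise histories, each trajectory carrying its weight $w[\phi_\eta]$. A minimal sketch of that ratio estimator (observables and weights assumed precomputed per trajectory):

```python
import numpy as np

def weighted_average(obs, weights):
    """Estimate <O>_{eta,w} = <O w>_eta / <w>_eta from per-trajectory samples."""
    obs, weights = np.asarray(obs, dtype=float), np.asarray(weights, dtype=float)
    return np.dot(obs, weights) / np.sum(weights)
```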

2. Construction and Role of the Weight Factor

The weight factor $w[\phi]$ is explicitly constructed as

$$w[\phi] = \frac{\det\overline{M}}{\det M}\, \exp\Bigg\{\frac{\epsilon}{4}\sum_{n=1}^{N}\int d^dx\,\Big[W_n^2(x)-\overline{W}_n^2(x)+2\,\nabla\phi_n(x)\big(W_n(x)+\overline{W}_n(x)\big)\Big] - S_{\rm QFT}[\phi_N] + S_{\rm QFT}[\phi_0]\Bigg\},$$

where $M$ and $\overline{M}$ are matrices built from the discretizations $W_n(x)$ and $\overline{W}_n(x)$, respectively, and $\nabla\phi_n(x) = [\phi_n(x) - \phi_{n-1}(x)]/\epsilon$. $S_{\rm QFT}[\phi]$ is the QFT action at the current fictitious time slice. This form restores (or nearly restores) a $\overline{Q}$ supersymmetry at finite lattice spacing, underpinning the equivalence proof. By appropriate choice of $W_n$ and $\overline{W}_n$ (e.g., both converging to the same continuum force and with suitably matching determinants so that $\det\overline{M}/\det M = 1 + \mathcal{O}(\epsilon)$), the weight factor $w[\phi]$ approaches $1$ as $\epsilon \rightarrow 0$, but enforces exactness for any $\epsilon > 0$.
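
A schematic zero-dimensional version of this formula is sketched below (no spatial integral, so $\int d^dx$ drops out). The drift functions $W$ and $\overline{W}$ and the determinant ratio $\det\overline{M}/\det M$ depend on the chosen discretizations and are supplied by the caller here; treating the determinant ratio as a precomputed number is an assumption of this sketch.

```python
import numpy as np

def weight_factor(traj, eps, W, Wbar, S_qft, det_ratio=1.0):
    """Schematic w[phi] for one 0-d trajectory traj[0..N], following
    w = (det Mbar / det M) * exp{ (eps/4) * sum_n [ W_n^2 - Wbar_n^2
        + 2 * grad_phi_n * (W_n + Wbar_n) ] - S_QFT(phi_N) + S_QFT(phi_0) }.
    W(phi_prev, phi_curr), Wbar(phi_prev, phi_curr): the two drift discretizations.
    det_ratio: stand-in for det(Mbar)/det(M) of the chosen schemes."""
    phi_prev, phi_curr = traj[:-1], traj[1:]
    grad = (phi_curr - phi_prev) / eps                    # nabla phi_n
    Wn, Wbarn = W(phi_prev, phi_curr), Wbar(phi_prev, phi_curr)
    expo = (eps / 4.0) * np.sum(Wn**2 - Wbarn**2 + 2.0 * grad * (Wn + Wbarn))
    expo += -S_qft(traj[-1]) + S_qft(traj[0])
    return det_ratio * np.exp(expo)
```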

3. Main Theorem: Equivalence of Discrete-Time Weighted Stochastic Quantization and QFT

The central result is the equivalence theorem:
$$\langle\varphi(x_1)\cdots\varphi(x_\ell)\rangle_{\rm QFT} = \lim_{N\to\infty} \langle \phi_{\eta,N}(x_1)\cdots\phi_{\eta,N}(x_\ell)\rangle_{\eta,w},$$
establishing that the large-fictitious-time ($N\to\infty$) limit of the weighted stochastic process reproduces QFT correlation functions exactly, even at fixed, finite $\epsilon$. This holds irrespective of the particular discretization used, provided the weight factor is constructed as above.

The necessity of the weight factor $w[\phi]$ is grounded in the algebraic structure of lattice supersymmetry: while the $Q$ supersymmetry is preserved on the lattice, the $\overline{Q}$ supersymmetry is generically broken unless $\epsilon \rightarrow 0$. The weight factor corrects for this breaking and is derived by tracking the variation of the path-integral measure and action under the Nicolai map.

4. Numerical and Perturbative Validation in a Zero-Dimensional Model

The method is tested on a zero-dimensional system where the path integral reduces to a one-dimensional integral:
$$Z = \int_{-\infty}^{\infty} dx\, e^{-V(x)},\qquad V(x) = \frac{1}{2} m^2 x^2 + \frac{1}{4}\lambda x^4,$$
with observables $\langle x^{2p}\rangle$ known analytically and via perturbative expansion, e.g.,
$$\langle x^{2p}\rangle = \frac{(2p-1)!!}{m^{2p}} \left[1 - p(p+2)\,\frac{\lambda}{m^4} + \mathcal{O}(\lambda^2)\right].$$
Two types of drift discretizations are considered: A-type (Stratonovich-inspired) and B-type (cyclic Leibniz rule).
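
Both reference values are easy to reproduce; the sketch below evaluates $\langle x^{2p}\rangle$ by direct numerical quadrature of the one-dimensional integral and compares it with the first-order expansion quoted above (the parameter values in the final line are illustrative).

```python
import numpy as np
from scipy.integrate import quad

def exact_moment(p, m2, lam):
    """<x^{2p}> from direct quadrature of Z = int dx exp(-V(x))."""
    V = lambda x: 0.5 * m2 * x**2 + 0.25 * lam * x**4
    Z, _ = quad(lambda x: np.exp(-V(x)), -np.inf, np.inf)
    num, _ = quad(lambda x: x**(2 * p) * np.exp(-V(x)), -np.inf, np.inf)
    return num / Z

def perturbative_moment(p, m2, lam):
    """(2p-1)!!/m^{2p} * [1 - p(p+2)*lam/m^4], valid to O(lambda)."""
    double_factorial = np.prod(np.arange(2 * p - 1, 0, -2, dtype=float))
    return double_factorial / m2**p * (1.0 - p * (p + 2) * lam / m2**2)

print(exact_moment(1, 1.0, 0.1), perturbative_moment(1, 1.0, 0.1))
```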

For each drift, the weight factor and discrete Langevin updates are specified (e.g., for B-type, $w^{\rm (B)}$ is a local function involving products of terms in $\phi_n$ and $\phi_{n-1}$, as detailed in the paper). Observable averages are then computed as

$$\langle\phi_N^{2p}\rangle_{\eta,w} = \frac{\langle\phi_N^{2p}\, w[\phi]\rangle_\eta}{\langle w[\phi]\rangle_\eta}.$$

Perturbative analysis confirms that this procedure reproduces the exact expansion to $\mathcal{O}(\lambda)$ for any $\epsilon$. Numerical simulations at strong and weak coupling, varying $\epsilon$ and total fictitious time $\tau = N\epsilon$, show that unweighted averages incur systematic errors for coarse $\epsilon$, while the weighted method yields results independent of $\epsilon$. The improvement is especially notable in the strong-coupling regime.
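
The qualitative behavior of the unweighted estimator can be probed with a short scan of the kind described: at fixed large $\tau = N\epsilon$, the plain long-time average of $\phi^2$ drifts with $\epsilon$, which is precisely the artifact the weight factor is designed to remove. The Euler drift, the use of a single long run with time averaging, and the parameter values below are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

def unweighted_phi2(m2, lam, eps, tau, n_therm=2000, seed=0):
    """Unweighted long-time estimate of <phi^2> from one discrete Langevin run."""
    rng = np.random.default_rng(seed)
    N = int(tau / eps)
    phi, acc = 0.0, 0.0
    for n in range(N + n_therm):
        eta = rng.normal(scale=np.sqrt(2.0 / eps))
        phi += eps * (-(m2 * phi + lam * phi**3) + eta)
        if n >= n_therm:                 # discard thermalization steps
            acc += phi**2
    return acc / N

for eps in (0.1, 0.05, 0.02, 0.01):      # scan eps at fixed tau = N * eps
    print(eps, unweighted_phi2(m2=1.0, lam=0.5, eps=eps, tau=5000.0))
```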

5. Theoretical and Practical Implications

The discrete-time stochastic quantization framework with the weight factor offers several advantages:

  • It removes the need for numerically expensive extrapolation to the fictitious-time continuum limit; correct QFT results are obtained directly for any finite $\epsilon$ in the large-time limit.
  • It accommodates different drift discretizations, allowing algorithmic flexibility.
  • In the zero-dimensional model, both perturbative and numerical results confirm the efficacy of the weighting, suggesting similar applicability in higher-dimensional, nontrivial models, provided the weight factor remains tractable.
  • The approach fundamentally relies on the structure of lattice supersymmetry, generalizing Nicolai map arguments and ensuring that lattice artifacts can be exactly compensated.

A plausible implication is that this methodology can be generalized to more complex systems where standard lattice stochastic quantization is computationally challenging due to the need for small $\epsilon$. The framework is particularly promising for efficient simulation and for the study of nonperturbative regimes.

6. Key Formulas and Definitions

The principal mathematical constructs in the framework are as follows:

  • Discrete Langevin equation:

$$\frac{\phi_n(x)-\phi_{n-1}(x)}{\epsilon} = -W_n(x) + \eta_n(x)\,.$$

  • Weighted noise average:

$$\langle\mathcal{O}[\phi]\rangle_{\eta,w} = \frac{1}{Z_{\eta,w}} \int [d\eta]\, \mathcal{O}[\phi_\eta]\, w[\phi_\eta]\, \exp\left\{-\frac{\epsilon}{4}\sum_{n=1}^N \int d^dx\,\eta_n(x)^2\right\}.$$

  • Weight factor (schematically):

$$w[\phi] = \frac{\det\overline{M}}{\det M}\, \exp\left\{\frac{\epsilon}{4}\sum_{n=1}^N \int d^dx\,\left[W_n^2(x)-\overline{W}_n^2(x)+2\,\nabla\phi_n(x)\big(W_n(x)+\overline{W}_n(x)\big)\right] - S_{\rm QFT}[\phi_N] + S_{\rm QFT}[\phi_0]\right\}.$$

  • Equivalence theorem:

$$\langle\varphi(x_1)\cdots\varphi(x_\ell)\rangle_{\rm QFT} = \lim_{N\to\infty} \langle \phi_{\eta,N}(x_1)\cdots\phi_{\eta,N}(x_\ell)\rangle_{\eta,w}.$$

7. Summary and Perspective

The discrete-time stochastic quantization framework with a corrective weight factor enables the exact recovery of QFT correlation functions in the large-time limit without requiring the continuum limit $\epsilon \to 0$ in the fictitious time coordinate. This is achieved by compensating for discretization artifacts at the level of the noise average through an analytically constructed weight based on matching drift discretizations. The approach is verified both analytically in perturbation theory and by numerical experiment in a zero-dimensional toy model, with strong evidence for improved accuracy, especially under coarse discretization or strong coupling. This framework offers a principled and potentially generalizable method for efficient stochastic quantization simulations across a range of quantum field theories (Kadoh et al., 24 Jan 2025).
