
Density-to-Potential Map in DFT

Updated 5 December 2025
  • Density-to-Potential Map is a functional mapping that reconstructs the external potential from a given density profile, fundamental to both classical and quantum DFT.
  • Neural operator architectures like FNO and DeepONet enable efficient, low-error inversion of density-to-potential relationships across varied physical systems.
  • Practical applications span confined fluids, adsorption phenomena, and quantum systems, though challenges remain in handling extrapolation and degeneracy limits.

A density-to-potential map is a functional or operator that reconstructs the external potential responsible for generating a prescribed density profile, typically in the context of equilibrium or dynamical many-body systems. Such maps are fundamental to the theoretical structure of density functional theory (DFT)—classical or quantum—and underpin inversion, control, and surrogate modeling across statistical mechanics, quantum chemistry, and condensed matter physics. In recent years, neural operator architectures and convex analysis have broadened the scope and tractability of density-to-potential mappings in high-dimensional and data-driven settings.

1. Definition and Formal Structure in Classical Density Functional Theory

In classical DFT (cDFT), the equilibrium one-body density $\rho(x)$ and the external potential $V_{\rm ext}(x)$ are linked by the Euler–Lagrange equation,

$$\rho(x)\Lambda^D = \exp\left[-\beta\left(V_{\rm ext}(x)-\mu\right) + c_1(x)\right],$$

where $\Lambda$ is the thermal wavelength, $\mu$ is the chemical potential, $\beta = 1/(k_B T)$, and $c_1(x)$ is the one-body direct correlation function, itself a functional derivative of the excess free energy $F_{\rm ex}[\rho]$:

$$c_1(x) = -\,\frac{\delta \beta F_{\rm ex}[\rho]}{\delta \rho(x)}.$$

The density-to-potential map aims to reconstruct $V_{\rm ext}(x)$ from $\rho(x)$. Explicitly,

$$\beta V_{\rm ext}(x) = \beta\mu - \ln\left[\rho(x)\Lambda^D\right] + c_1(x).$$

Thus, learning or computing the map $\rho \rightarrow c_1$ suffices to yield $\rho \rightarrow V_{\rm ext}$ via a closed algebraic step (Pan et al., 7 Jun 2025).
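Once $c_1$ is available (analytically, from simulation, or from a learned surrogate), the inversion is a pointwise algebraic step. A minimal NumPy sketch of that step (the function name and default parameters are illustrative, not from the cited work):

```python
import numpy as np

def external_potential(rho, c1, beta=1.0, mu=0.0, Lambda=1.0, D=1):
    """Recover beta * V_ext(x) from a density profile via the
    Euler-Lagrange relation:
        beta * V_ext = beta * mu - ln(rho * Lambda^D) + c1.
    `rho` and `c1` are arrays sampled on the same spatial grid."""
    return beta * mu - np.log(rho * Lambda**D) + c1
```

For an ideal gas ($c_1 = 0$) the formula reduces to the exact barometric inversion $\beta V_{\rm ext} = \beta\mu - \ln(\rho\Lambda^D)$, which provides a simple consistency check.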

2. Neural Operator Architectures for Density-to-Potential Mapping

Recent advances leverage neural operator frameworks for modeling functional relationships between density and potential profiles. Several architectures have demonstrated efficacy:

  • Deep Operator Network (DeepONet): Decomposes the mapping into a branch net (acting on sampled input densities) and a trunk net (acting on spatial coordinates), combining their outputs as

$$[\mathcal{G}(\rho)](x) = \sum_{k=1}^{p} b_k\,\tau_k(x) + b_0,$$

where the coefficients $b_k$ are produced by the branch net and the basis functions $\tau_k(x)$ by the trunk net. Residual multiscale convolutional variants (RMSCNN) with trainable Gaussian kernels yield the best performance among the DeepONet designs.

  • Fourier Neural Operator (FNO): Employs spectral convolution with mode truncation in the Fourier domain, maintaining translation invariance and robust generalization under varying confining potentials. The iterative layer update reads

$$z^{j} = \sigma\!\left(W z^{j-1} + \mathcal{F}^{-1}\!\left[R\,\mathcal{F}(z^{j-1})\right]\right),$$

where the learned spectral weights $R$ act on the retained Fourier modes.
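The layer update above can be sketched in a few lines of NumPy for a single-channel 1D field. This is a deliberately stripped-down toy (real FNO layers mix channels and learn $W$ and $R$ by backpropagation); the squared-ReLU default mirrors the sReLU activation discussed below:

```python
import numpy as np

def fno_layer(z, W, R, modes, activation=lambda t: np.maximum(t, 0.0)**2):
    """One Fourier-neural-operator layer (single-channel 1D sketch):
        z_new = sigma(W z + F^{-1}[ R * F(z) ]),
    where only the lowest `modes` Fourier modes are retained and
    multiplied by the learned complex weights R."""
    zh = np.fft.rfft(z)                      # forward transform
    filtered = np.zeros_like(zh)
    filtered[:modes] = R[:modes] * zh[:modes]  # spectral weights on kept modes
    spectral = np.fft.irfft(filtered, n=z.size)
    return activation(W * z + spectral)      # pointwise term + nonlinearity
```

Truncating to low modes acts as a learned low-pass filter, which is one source of the architecture's robustness under smooth changes of the confining potential.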

Benchmarking on 1D hard-rod fluids (domain $[0,8]$) reveals that FNO achieves the lowest mean squared error (MSE), particularly with the squared-ReLU (sReLU) activation, for both interpolation and extrapolation (Pan et al., 7 Jun 2025).

Performance for the $\rho \to c_1$ mapping (MSE; Pan et al., 7 Jun 2025):

| Architecture | Training MSE | In-Group Test MSE | Extrapolation MSE |
| --- | --- | --- | --- |
| Full-range DNN | $5.8 \times 10^{-4}$ | $5.8 \times 10^{-4}$ | $2.2 \times 10^{-2}$ |
| Quasi-local DNN | $1.3 \times 10^{-7}$ | $1.5 \times 10^{-7}$ | $7.8 \times 10^{-3}$ |
| DNN-DeepONet | $8.7 \times 10^{-5}$ | $2.2 \times 10^{-4}$ | $8.6 \times 10^{-3}$ |
| GK-RMSCNN-DeepONet | $3.4 \times 10^{-5}$ | $4.4 \times 10^{-5}$ | $9.6 \times 10^{-4}$ |
| FNO | $5.3 \times 10^{-7}$ | $5.1 \times 10^{-7}$ | $7.8 \times 10^{-6}$ |

3. Mathematical Properties, Uniqueness, and Degeneracies

The density-to-potential map is generically unique (modulo trivial constants) under regularity conditions. For lattice models, uniqueness holds except at nodes of the wavefunction or fully Pauli-limited sites (Coe et al., 2016). In the continuum, uniqueness is guaranteed within the Hohenberg–Kohn framework, provided the ground state is nondegenerate and the unique continuation property is satisfied (Penz et al., 2022).

Degeneracy regions in finite systems correspond to convex hulls of algebraic varieties in density space. Non-uniquely $v$-representable densities arise at boundaries or intersections of degeneracy regions, as elucidated via analytic geometry (e.g., Roman surfaces in $K_4$ graphs). Such measure-zero sets accommodate multivaluedness of the density-to-potential inverse (Penz et al., 2022).

For classical dynamical DFT, uniqueness of the time-dependent density-to-potential mapping requires regularity, non-vanishing densities, and boundary conditions (no-flux or prescribed boundary currents), excluding diffusion equivalence (potential shifts constant on the support of $\rho$) (Klatt et al., 2023).

4. Data-Driven and Numerical Inversion Methodologies

Neural operators, such as FNO and DeepONet, provide rapid, stable surrogates for density-to-potential inversion in cDFT and physical simulations. For hard-rod fluids, FNO enables sub-$10^{-3}$ errors in density recovery from potential profiles, with inference times of ∼10 ms per sample (Pan et al., 7 Jun 2025). Hybrid approaches integrating Gaussian Process Regression (GPR) and active learning can push the MSE further into the $10^{-6}$–$10^{-5}$ regime for in-group and new-data test sets.

On quantum lattices (Hubbard-type models), practical iterative inversion schemes reconstruct the site-potential vector $v$ from target densities $n_i^{\rm target}$ by leveraging occupation moments and fixed-point updates (Coe et al., 2016). In periodic electronic systems, convex analysis and Moreau–Yosida regularization yield Lipschitz-stable regularized inverses, facilitating robust numerical Hartree–Fock inversion for local Kohn–Sham potentials, with explicit error bounds proportional to input density perturbations (Bohle et al., 28 Oct 2025).
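The fixed-point idea can be illustrated on a non-interacting toy lattice: raise the potential wherever the current density overshoots the target and lower it where it undershoots, iterating to self-consistency. The chain size, step size `eta`, and stopping rule below are illustrative choices for the sketch, not parameters of the cited schemes:

```python
import numpy as np

def lattice_density(v, n_particles, t=1.0):
    """Ground-state site densities of a 1D tight-binding chain with
    on-site potentials v (non-interacting spinless fermions)."""
    L = len(v)
    H = np.diag(v) - t * (np.eye(L, k=1) + np.eye(L, k=-1))
    _, U = np.linalg.eigh(H)
    occ = U[:, :n_particles]            # occupy the lowest orbitals
    return (occ**2).sum(axis=1)

def invert_density(n_target, n_particles, eta=0.2, tol=1e-11, max_iter=50000):
    """Fixed-point reconstruction of site potentials from target
    densities: v_i <- v_i + eta * (n_i - n_i_target)."""
    v = np.zeros_like(n_target)
    for _ in range(max_iter):
        dv = eta * (lattice_density(v, n_particles) - n_target)
        v += dv
        if np.abs(dv).max() < tol:
            break
    return v - v.mean()                 # fix the trivial constant shift
```

The final mean subtraction reflects the fact that the potential is only determined up to a constant, since a uniform shift of $v$ leaves all densities unchanged at fixed particle number.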

5. Physical Applications and Limitations

Density-to-potential mappings undergird experimental and simulation protocols for confined fluids, adsorption phenomena, charge transfer, and dynamical control in colloidal systems. Example applications include:

  • Confined fluids and adsorption: Inversion surrogates enable rapid recovery of confining potentials from observed density profiles, relevant to nanofluidics and supercapacitor modeling (Pan et al., 7 Jun 2025).
  • Colloidal random landscapes: Empirical inversion of mean density maps using linear-response relations and particle–particle correlations reconstructs underlying optical potentials in colloidal media (Bewerunge et al., 2016).
  • Galaxy dynamics: Reconstruction of gravitational potentials from phase-space density snapshots via deep normalizing flows and neural networks, bypassing parametric model fitting (Green et al., 2020).

Limitations arise in extrapolation to out-of-distribution potentials, regions of strong gradients (e.g., near hard walls), and in the presence of degeneracies where uniqueness fails. Robustness can be guaranteed via regularization and non-expansive convex analysis methods, but qualitative generalization may deteriorate for unrepresented confining features or multidimensional extension (Pan et al., 7 Jun 2025, Bohle et al., 28 Oct 2025, Penz et al., 2022).

6. Connections to Quantum Density Functional Theory and Generalizations

In quantum DFT, the density-to-potential map is a consequence of convex duality and the subdifferential structure of the universal energy functional—the Rayleigh–Ritz principle and Legendre–Fenchel transforms formalize this mapping (Penz et al., 2022). Extensions include:

  • Time-dependent quantum systems: Density-to-potential inversion underpins TDDFT via fixed-point contraction arguments and the Runge–Gross theorem for analytic potentials, with rigorous functional-analytic existence and uniqueness (often via Sturm–Liouville theory) (Ruggenthaler et al., 2014, Ruggenthaler et al., 2012).
  • Potential mapping with currents and fields: For current-DFT and magnetic field extensions, the mapping can fail (paramagnetic CDFT) except under regularization (Moreau–Yosida or Maxwell–Schrödinger DFT), which restores differentiability and uniqueness for generalized density-potential pairs (Penz et al., 2023).

Convex analysis and functional regularization form the modern mathematical backbone of rigorous mapping theorems and algorithmic inversion in both classical and quantum regimes.
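In the standard convex-analysis formulation (notation schematic: $E(v)$ is the ground-state energy and $F[\rho]$ the universal functional), the duality reads

$$E(v) = \inf_{\rho}\Big\{ F[\rho] + \int v(x)\,\rho(x)\,dx \Big\}, \qquad F[\rho] = \sup_{v}\Big\{ E(v) - \int v(x)\,\rho(x)\,dx \Big\},$$

and the density-to-potential map is recovered from the subdifferential condition $-v \in \partial F[\rho]$: the potentials generating a given density are exactly the (sign-reversed) subgradients of $F$ at that density.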

7. Scaling Laws and Practical Surrogate Design

Scaling analysis reveals that neural operator architectures exhibit Chinchilla-type scaling for MSE in functional fitting, with irreducible errors set by model capacity and dataset size. FNO, in particular, features an extrapolation floor $\exp(L_\infty) \approx 2.5 \times 10^{-6}$ and scaling exponents $\alpha \approx 0.38$ (dataset size) and $\beta \approx 0.26$ (parameter count), outperforming standard DNN or kernel methods for density-to-potential surrogates in cDFT (Pan et al., 7 Jun 2025).
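A Chinchilla-type ansatz is a one-line model: an irreducible floor plus power laws in dataset size and parameter count. The exponents and floor below are the values quoted above, while the prefactors `A`, `B` and the function name are illustrative placeholders:

```python
def chinchilla_mse(N, P, L_floor=2.5e-6, A=1.0, alpha=0.38, B=1.0, beta=0.26):
    """Chinchilla-type scaling ansatz for surrogate MSE:
    irreducible floor plus power-law terms in dataset size N
    and parameter count P. Prefactors A, B are placeholders."""
    return L_floor + A * N**(-alpha) + B * P**(-beta)
```

The form makes the practical trade-off explicit: once either power-law term dominates, adding data or parameters alone gives diminishing returns, and the floor bounds what any amount of scaling can achieve.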

These advances position operator-based density-to-potential maps as both mathematically rigorous and computationally efficient tools for inversion, control, and functional emulation in many-body physical sciences.
