Inverse Identification of Potentials
- Inverse identification of potential functions is a framework for recovering hidden potentials from indirect measurements in various physical and mathematical systems.
- It leverages diverse methods—including spectral analysis, optimization, and control theory—with tailored regularization to ensure unique and stable reconstructions.
- Applications span quantum mechanics, stochastic diffusion, and optimal transport, demonstrating the approach’s versatility and practical significance.
Inverse identification of potential functions refers to the class of mathematical and computational problems where one seeks to recover an underlying potential function from indirect observations associated with physical, stochastic, optimization, or control processes. These inverse problems arise in quantum mechanics, diffusion, optimal transport, aggregation dynamics, spectral theory, and variational calculus—each with distinct identifiability challenges, data requirements, and solution methodologies.
1. Formulations of Inverse Potential Problems
Inverse potential identification is encapsulated in various operator and system formulations. In quantum evolution, the canonical setting is the Schrödinger equation with a time-independent potential, with the inverse problem posed via the initial-to-final-state map (Cañizares et al., 8 Mar 2025). In stochastic diffusion, the model is a stochastic heat equation with a multiplicative potential and noise, and the potential must be reconstructed from expected log-observables at fixed spatial points (Feng et al., 2023). In optimal transport, the cost function (the potential in the transport dynamics) must be retrieved from optimal transport values and marginal data (González-Sanz et al., 2023). In control and dynamical systems, both local (classical) and non-local potential identification problems are cast via operator or spectral data and variational criteria (Pauwels et al., 2014, Liu et al., 17 Feb 2025, Vabishchevich, 6 Oct 2025, Alhaidari et al., 2020, Khosravi et al., 2020).
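As a point of reference, the two evolution settings above can be written schematically as follows (standard textbook forms under simplifying assumptions; the cited works state their own precise hypotheses and noise models):

```latex
% Schematic formulations (not verbatim from the cited papers).
% Quantum evolution with a time-independent potential V and
% initial-to-final-state map \Lambda_V^T:
i\,\partial_t u = -\Delta u + V(x)\,u, \qquad u(0,\cdot) = u_0, \qquad
\Lambda_V^T : u_0 \longmapsto u(T,\cdot).
% Stochastic heat equation with multiplicative potential q and noise W
% (one common form), observed through expected log-amplitudes:
du = \big(\Delta u + q(t)\,u\big)\,dt + u\,dW_t, \qquad
\text{data: } t \mapsto \log \mathbb{E}\,u(t, x_0).
```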
The commonality is indirect observation: rather than direct measurement of the potential, one accesses transformation maps, output statistics, boundary values, spectral quantities, or extremal actions determined by the potential.
2. Uniqueness and Identifiability Criteria
Uniqueness guarantees relate functional data—operator maps, observables, or cost values—to recovery of the underlying potential. In quantum systems governed by the Schrödinger operator, full knowledge of the dynamical (initial-to-final-state) map suffices to uniquely identify any bounded, integrable, super-linearly decaying potential (Cañizares et al., 8 Mar 2025). This result exploits stationary wave expansions and circumvents the complex geometrical optics (CGO) machinery required for time-dependent potentials.
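Schematically, and in line with the orthogonality identities listed in Section 3, equality of the dynamical maps for two admissible potentials forces the Fourier transform of their difference to vanish, which yields uniqueness:

```latex
% Schematic uniqueness step (notation as in the sketch of Section 1):
\Lambda_{V_1}^T = \Lambda_{V_2}^T
\;\Longrightarrow\;
\widehat{V_1 - V_2}(\xi) = 0 \ \text{for all } \xi
\;\Longrightarrow\;
V_1 = V_2 \ \text{a.e.}
```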
For stochastic diffusion equations with multiplicative white noise, strict uniqueness is proven for nonnegative, square-integrable potentials from the time trace of expected log-amplitudes at a single spatial point, with an explicit reconstruction formula (Feng et al., 2023).
In inverse optimal transport under convex or concave cost structures, the transport cost is determined (up to additive constants) from the union of the ranges of gradients of the optimal Kantorovich potentials (González-Sanz et al., 2023). In univariate settings, statistical completeness (injectivity of certain transforms) ties the existence of unique cost recovery to properties of the marginal families.
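A schematic version of this relation (smooth, translation-invariant case with cost c(x, y) = h(x - y), h convex; the cited work covers more general convex and concave structures) is the first-order optimality condition along the optimal map T pushing the source marginal to the target:

```latex
% Envelope / first-order condition on the support of the source marginal \mu:
\nabla \varphi(x) = \nabla h\big(x - T(x)\big) \quad \text{for } x \in \operatorname{supp}(\mu).
```

Thus the gradient of the cost is observed on the range of x - T(x); varying the marginal pairs enlarges this range and pins down h up to an additive constant.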
In control-theoretic settings, the inverse optimal control problem utilizes Hamilton-Jacobi-Bellman optimality conditions and sum-of-squares (SOS) techniques to promote uniqueness or sparsity up to a positive scale (Pauwels et al., 2014). For non-local operator perturbations, four distinct spectral measurements are mathematically sufficient for full recovery of both amplitude and kernel (Liu et al., 17 Feb 2025).
Non-uniqueness is endemic when the data does not sufficiently constrain the solution, as in inverse quasiconvexification problems, where a rich family of non-convex energies may produce the same relaxed quasiconvex integrand (Pedregal, 2019).
3. Analytic and Computational Techniques
A spectrum of analytic tools is deployed according to the operator and data landscape:
- Stationary wave construction and Herglotz wave analysis for quantum operators (Cañizares et al., 8 Mar 2025).
- Orthogonality identities and high-energy spectral expansion: reducing map equivalence to vanishing Fourier transforms of potential differences.
- Spectral analysis for non-local operators: partial fraction expansions, characteristic polynomials, and product representations to relate perturbations to spectral shifts (Liu et al., 17 Feb 2025).
- Regularized differentiation and frequency-domain filtering: Tikhonov and spectral cut-off methods for numerically stable inversion formulas in stochastic PDE contexts (Feng et al., 2023); a numerical sketch follows this list.
- Finite dimensional convex and nonparametric optimization: Quadratic Programming (QP) formulations with convexity constraints, enabling scalable recovery of non-convex potential functions as differences of convex interpolants (Khosravi et al., 2020).
- Polynomial optimization and Linear Matrix Inequalities: Hamilton-Jacobi-Bellman-based sum-of-squares conditions for the inverse control and Lagrangian identification problems (Pauwels et al., 2014).
- Split Bregman methods: Handling nonlocal potential identification in aggregation equations via TV-regularized variational problems with robust denoising techniques (He et al., 2022).
- Nonnegative Least Squares (NNLS): Used to fit boundary layer potentials in classical inverse potential theory for source domain localization, leveraging a priori nonnegativity to stabilize the inversion (Vabishchevich, 6 Oct 2025); see the NNLS sketch after this list.
- Orthogonal polynomial and spectral matching: Construction of potentials matching specified spectral sequences via analysis of Jacobi, Wilson, and other orthogonal polynomial families, matrix element reconstruction, and analytic matching (Alhaidari et al., 2020).
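The regularized-differentiation item above can be illustrated with a minimal sketch (generic synthetic data; this is not the reconstruction formula of Feng et al., 2023, only the stabilized differentiation step): a noisy time trace is differentiated by zeroing high Fourier modes before applying the derivative in the frequency domain.

```python
import numpy as np

def spectral_cutoff_derivative(f, dt, cutoff_frac=0.1):
    """Differentiate noisy samples f(t_k) by zeroing high Fourier modes.

    cutoff_frac is the fraction of the positive frequency band retained;
    it plays the role of the regularization parameter.
    """
    freqs = np.fft.fftfreq(f.size, d=dt)        # cycles per unit time
    f_hat = np.fft.fft(f)
    keep = np.abs(freqs) <= cutoff_frac * np.abs(freqs).max()
    deriv_hat = (2j * np.pi * freqs) * f_hat * keep
    return np.fft.ifft(deriv_hat).real

# Synthetic noisy observable g(t) whose time derivative is sought.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 512, endpoint=False)
g = np.sin(2 * np.pi * t) + 0.01 * rng.standard_normal(t.size)

dg = spectral_cutoff_derivative(g, dt=t[1] - t[0], cutoff_frac=0.05)
err = np.max(np.abs(dg - 2 * np.pi * np.cos(2 * np.pi * t)))
print(f"max pointwise error of regularized derivative: {err:.3f}")
```

The retained-frequency fraction acts as the regularization parameter; Tikhonov filtering would damp, rather than truncate, the high modes.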
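The NNLS item can likewise be sketched with scipy.optimize.nnls; the layer-potential matrix and the domain-search loop of the cited method are replaced here by a synthetic stand-in.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic discretization: columns of K play the role of boundary responses
# of candidate single-layer source densities; b holds measured boundary values.
rng = np.random.default_rng(3)
n_obs, n_src = 120, 40
K = np.abs(rng.standard_normal((n_obs, n_src)))   # stand-in for a layer-potential matrix
density_true = np.zeros(n_src)
density_true[10:15] = 1.0                          # nonnegative, localized source
b = K @ density_true + 0.01 * rng.standard_normal(n_obs)

# Nonnegativity acts as the stabilizing a priori constraint.
density_rec, residual = nnls(K, b)
print("support of recovered density:", np.nonzero(density_rec > 0.1)[0])
print("residual norm:", residual)
```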
4. Data Types and Regularization Strategies
Choice of data profoundly influences identifiability and algorithmic stability:
- Full operator data (initial-to-final maps or spectral sequences) offers the strongest uniqueness results but is often inaccessible.
- Boundary measurements and potential values require regularization (Tikhonov, spectral cut-off, total variation, ℓ¹-norm) to mitigate ill-posedness and measurement noise.
- Optimality certificates and observed trajectories are encoded as constraints in control and gradient flow identification, further regularized by convexity, sparsity, and normalization penalties.
- Agent-based and time-varying data are processed via kernel density estimation, time segmentation, and adaptive regularization in nonlocal inverse problems.
- Partial potential values and marginals (optimal transport, univariate inversion, and variational calculus) connect nonlinear problem structure to linear integral transforms and completeness conditions.
Regularization parameters and smoothing strategies are tuned according to data modality and expected signal structure, with substantial impact on numerical performance and recovery error.
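A minimal, generic illustration of such tuning (not tied to any one cited method) is the discrepancy principle for Tikhonov regularization: the regularization parameter is chosen as the largest value whose data residual does not exceed the estimated noise level.

```python
import numpy as np

def tikhonov_solve(A, y, lam):
    """Solve min_x ||A x - y||^2 + lam ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

def discrepancy_principle(A, y, noise_level, lams):
    """Return the largest lambda (from a decreasing grid) whose residual
    does not exceed the estimated noise norm, together with its solution."""
    for lam in lams:
        x = tikhonov_solve(A, y, lam)
        if np.linalg.norm(A @ x - y) <= noise_level:
            return lam, x
    return lams[-1], tikhonov_solve(A, y, lams[-1])

# Synthetic smoothing operator and noisy data (purely illustrative).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 80)
A = np.exp(-50.0 * (t[:, None] - t[None, :]) ** 2)   # Gaussian blur matrix
x_true = np.sin(2 * np.pi * t)
sigma = 1e-2
y = A @ x_true + sigma * rng.standard_normal(t.size)

lam, x_rec = discrepancy_principle(A, y, np.sqrt(t.size) * sigma,
                                   lams=np.logspace(2, -8, 50))
rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
print(f"chosen lambda = {lam:.2e}, relative error = {rel_err:.3f}")
```

Spectral cut-off, total-variation, or sparsity penalties admit analogous tuning rules, with the residual test adapted to the relevant noise model.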
5. Applications Across Domains
Inverse identification of potential functions informs multifarious applications:
- Quantum mechanics: Hamiltonian determination for backward prediction and quantum control, with implications for uniqueness in high dimensions and practical avoidance of CGO requirements (Cañizares et al., 8 Mar 2025).
- Stochastic physics: Recovery of time-dependent forcing in random diffusion, with explicit uniqueness and a pathway to extensions in multidimensional, spatially inhomogeneous, and non-Gaussian noise environments (Feng et al., 2023).
- Optimal transport and economics: Direct identification of underlying transportation or matching cost structures from observed marginal and plan statistics (González-Sanz et al., 2023).
- Gradient flows and system identification: Nonparametric reconstruction of vector field potentials in data-driven dynamical systems, with convexity structure mapped to efficient QP solvers (Khosravi et al., 2020); a minimal QP sketch follows this list.
- Control and robotics: Inferring cost functionals (Lagrangians) governing observed optimal trajectories for reverse engineering or imitation learning, quantifying identifiability via polynomial optimization and SDP (Pauwels et al., 2014).
- Classical potential theory: Source domain localization for geophysical, medical imaging, and inverse field mapping, robustly implemented via NNLS and boundary layer approximations (Vabishchevich, 6 Oct 2025).
- Computational spectral theory: Construction of families of potentials sharing identical spectra, extending the catalogue of exactly-solvable models (Alhaidari et al., 2020).
- Aggregation in biology and social dynamics: Identification of interaction kernels from population density evolution, robust to noise and under minimal measurement regimes (He et al., 2022).
- Calculus of variations and material science: Inverse quasiconvexification as a fundamental design principle for variational integrands in relaxed optimization, with direct consequences for conductivity imaging and microstructure modeling (Pedregal, 2019).
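The gradient-flow item above mentions QP solvers with convexity structure; below is a minimal one-dimensional sketch using cvxpy. The cited approach (Khosravi et al., 2020) is nonparametric and recovers non-convex potentials as differences of convex interpolants; this sketch shows only the convex building block, fit to noisy gradient samples with nonnegative second differences as the discrete convexity constraint.

```python
import numpy as np
import cvxpy as cp

# Synthetic 1-D example: noisy samples of the gradient of a convex potential.
rng = np.random.default_rng(1)
x = np.linspace(-2.0, 2.0, 60)          # uniform grid
h = x[1] - x[0]
grad_true = 2.0 * x                      # gradient of V(x) = x^2
grad_obs = grad_true + 0.1 * rng.standard_normal(x.size)

# Decision variable: potential values V_i on the grid.
V = cp.Variable(x.size)

# Forward differences approximate the gradient at the left grid points
# (a crude first-order scheme, adequate for this sketch).
grad_model = (V[1:] - V[:-1]) / h

# Discrete convexity: nonnegative second differences; pin one value to
# remove the additive gauge freedom of the potential.
constraints = [V[2:] - 2 * V[1:-1] + V[:-2] >= 0,
               V[x.size // 2] == 0.0]

objective = cp.Minimize(cp.sum_squares(grad_model - grad_obs[:-1]))
cp.Problem(objective, constraints).solve()

V_rec = V.value
print("smallest second difference (discrete convexity check):",
      float(np.min(np.diff(V_rec, n=2))))
```

The resulting problem is a convex QP whose size grows with the number of grid points, which is what makes the convexity-constrained formulation computationally attractive.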
6. Limitations, Controversies, and Extensions
Limitations are generally problem- and data-specific:
- Ill-posedness: Non-uniqueness or instability can arise when data is insufficient or the operator is not injective, especially without a priori constraints (e.g., nonnegativity, support bounds).
- Regularity assumptions: Many uniqueness results rely on integrability, boundedness, or decay (super-linear, exponential) that may not be optimal or physically realistic.
- Computational tractability: High-dimensional QP or SDP formulations scale cubically with the number of data points, and fine-grained inversion of nonlocal or spectral operators may be limited by memory or discretization.
- Extension to inhomogeneous or time-dependent potentials: Full characterization remains unavailable for general manifolds, colored noise, or spatially-varying coefficients.
- Inverse quasiconvexification: Infinite degrees of freedom in constructing non-convex pre-relaxations with identical relaxations; complete characterization of all admissible potentials remains unresolved, and connection to stability of PDE-constrained inverse problems is subject to ongoing research (Pedregal, 2019).
Promising extension directions include relaxing regularity for Hamiltonian identification, generalizing stationary wave constructions beyond Euclidean space, integrating agent-based and time-varying data modalities, and iteratively layering a priori constraints with domain-search algorithms for source localization.
7. Summary Table: Domains and Key Techniques
| Domain | Data Type | Key Analytic/Algorithmic Techniques |
|---|---|---|
| Schrödinger IHFP (Cañizares et al., 8 Mar 2025) | Full evolution map | Stationary waves, Herglotz, Fourier analysis |
| Stochastic Diffusion (Feng et al., 2023) | Log-mean trace | Explicit inversion, Tikhonov, cut-off |
| Optimal Transport (González-Sanz et al., 2023) | Marginals, OT values | Gradients of potentials, completeness |
| Control (HJB) (Pauwels et al., 2014) | Trajectory database | SDP, SOS, regularization |
| Nonlocal Operator (Liu et al., 17 Feb 2025) | 4 spectra | Partial fraction, spectral reconstruction |
| Potential Theory (Vabishchevich, 6 Oct 2025) | Boundary values | NNLS, domain search, single-layer approx. |
| Aggregation PDE (He et al., 2022) | Density evolution | TV, Laplacian regularization, split-Bregman |
| Gradient Flows (Khosravi et al., 2020) | States, derivatives | QP, convex/DC decomposition, subgradients |
| Quasiconvexification (Pedregal, 2019) | Relaxed functional | Coincidence sets, Young measures, laminates |
| Spectral Matching (Alhaidari et al., 2020) | Spectral formula | Polynomial recursion, basis reconstruction |
Inverse identification of potential functions constitutes a mathematically rigorous foundation for theory and computation in nonlinear dynamics, quantum and stochastic systems, optimal transport, and classical field theory. Problem structure, data modality, and regularization are pivotal to identifiability and practical inversion, with advanced analytic and optimization techniques providing the backbone for solution and extension.