Entropy-Admissible Solutions: Theory & Applications
- Entropy-admissible solutions are mathematically rigorous constructs that enforce physical selection criteria using tailored entropy inequalities, variational principles, or dissipation conditions.
- They guarantee well-posedness by ensuring uniqueness and stability in scalar conservation laws, nonlocal models, and quantum estimation through convexity and information divergence frameworks.
- Numerical schemes and machine learning applications leverage entropy-admissibility to maintain stability and convergence, while highlighting challenges in multidimensional nonlinear systems.
An entropy-admissible solution is a mathematically rigorous construct that singles out physically meaningful solutions among all weak or generalized solutions to a broad range of optimization, estimation, and dynamical problems. Entropy-admissibility enforces selection criteria grounded in convexity, dissipation, irreversibility, or information divergence, depending on the context. Across partial differential equations, inverse and optimization problems, stochastic processes, ergodic theory, and quantum estimation, entropy-admissibility is formalized via tailored entropy inequalities, variational principles with entropy terms, or admissibility classes defined by entropy-related constraints. These frameworks ensure well-posedness (existence, uniqueness, stability) and often correspond to solutions that optimize a thermodynamic or information-theoretic functional under prescribed constraints.
1. Scalar Conservation Laws: Classical and Discontinuous Flux
For scalar conservation laws, entropy-admissible solutions are weak solutions that additionally satisfy entropy inequalities ruling out non-physical behaviors, such as inadmissible shocks or oscillatory artifacts. In the classical case with continuous flux, the Kružkov entropy condition requires that, for any convex entropy-entropy flux pair (e.g., $\eta(u)=|u-k|$, $q(u)=\sgn(u-k)(f(u)-f(k))$), the following holds in the sense of distributions:

$$\partial_t |u-k| + \partial_x\bigl[\sgn(u-k)\,(f(u)-f(k))\bigr] \le 0$$

for every constant $k \in \mathbb{R}$. Entropy-admissibility thus selects the physically relevant solution, ensuring uniqueness and $L^1$-contraction, out of the infinitely many weak solutions that may solve the equation (Bressan et al., 2020).
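As a concrete numerical illustration, here is a minimal sketch, assuming Burgers' flux $f(u)=u^2/2$, a periodic grid, and illustrative parameters: a monotone Lax-Friedrichs scheme is known to converge to the Kružkov entropy solution, and we probe the discrete entropy residual for $\eta(u)=|u-k|$.

```python
import numpy as np

# Minimal sketch: monotone Lax-Friedrichs scheme for Burgers' equation
# u_t + (u^2/2)_x = 0; monotone schemes converge to the Kruzkov entropy solution.

f = lambda u: 0.5 * u**2                           # Burgers flux (illustrative)

def lf_step(u, dx, dt):
    up, um = np.roll(u, -1), np.roll(u, 1)         # periodic neighbors
    return 0.5 * (up + um) - dt / (2 * dx) * (f(up) - f(um))

def kruzkov_residual(u_new, u_old, dx, dt, k):
    """Coarse probe of d_t|u-k| + d_x[sgn(u-k)(f(u)-f(k))] <= 0."""
    q = lambda u: np.sign(u - k) * (f(u) - f(k))   # Kruzkov entropy flux
    return (np.abs(u_new - k) - np.abs(u_old - k)) / dt \
        + (np.roll(q(u_new), -1) - np.roll(q(u_new), 1)) / (2 * dx)

x = np.linspace(-1, 1, 400, endpoint=False)
dx = x[1] - x[0]
dt = 0.2 * dx                                      # CFL-stable time step
u = np.where(x < 0, 1.0, 0.0)                      # Riemann data -> admissible shock
for _ in range(199):
    u = lf_step(u, dx, dt)
u_new = lf_step(u, dx, dt)
print("max entropy residual:", kruzkov_residual(u_new, u, dx, dt, k=0.5).max())
```

The centered-difference probe is only a coarse diagnostic; the rigorous discrete statement is a cell entropy inequality formulated with numerical entropy fluxes.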
When the flux is discontinuous across an interface (e.g., flux $f(u)$ for $x>0$ and $g(u)$ for $x<0$), classical entropy conditions are insufficient. Mitrović introduced the $(\alpha,\beta)$-entropy admissibility framework: two strictly increasing maps $\alpha$ and $\beta$ reparametrize the solution on either side of the interface, entropy conditions are imposed separately in each region, and an explicit entropy-dissipation defect is allowed at the interface. The generalized entropy admissibility condition becomes
$$\begin{aligned} &\partial_t\left\{\sgn(v-\xi)\left[H(x)(\alpha(v)-\alpha(\xi)) + H(-x)(\beta(v)-\beta(\xi))\right]\right\} \\ &\quad + \partial_x\left\{\sgn(v-\xi)\left[H(x)(f\circ\alpha(v)-f\circ\alpha(\xi))+H(-x)(g\circ\beta(v)-g\circ\beta(\xi))\right]\right\} \\ &\quad - |f\circ\alpha(\xi)-g\circ\beta(\xi)|\,\delta(x) \le 0, \end{aligned}$$
for all constants $\xi$ (Mitrovic, 2010). This framework ensures existence and uniqueness without requiring convexity or genuine nonlinearity and recovers all previously known admissibility criteria as special cases.
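A small numerical illustration of the interface term: the sketch below, with hypothetical choices of $\alpha$, $\beta$, $f$, and $g$, evaluates the entropy-dissipation defect $|f\circ\alpha(\xi)-g\circ\beta(\xi)|$ over a range of constants $\xi$. Where it vanishes, the fluxes match across the interface and the condition reduces to the classical Kružkov form.

```python
import numpy as np

# Evaluate the interface entropy-dissipation defect
# D(xi) = |f(alpha(xi)) - g(beta(xi))| from the (alpha, beta)-entropy condition.
# All maps below are hypothetical placeholders.

alpha = lambda v: v                 # strictly increasing reparametrization, x > 0
beta  = lambda v: 2.0 * v           # strictly increasing reparametrization, x < 0
f     = lambda u: 0.5 * u**2        # flux on x > 0
g     = lambda u: u * (1.0 - u)     # flux on x < 0 (LWR-type)

for xi in np.linspace(0.0, 1.0, 6):
    print(f"xi = {xi:.1f}   defect = {abs(f(alpha(xi)) - g(beta(xi))):.4f}")
```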
2. Entropy Admissibility in Nonlocal, Nonhomogeneous, and Numerical Contexts
Entropy-admissibility extends robustly to nonlocal conservation laws and inhomogeneous models. In nonlocal traffic-flow models with convolution-based flux, it has been shown that as the nonlocality vanishes (kernel parameter $\varepsilon \to 0$), the solutions converge in $L^1_{\mathrm{loc}}$ to the unique entropy-admissible solution of the corresponding local equation (Bressan et al., 2020). The argument employs uniform bounded-variation (BV) estimates, compactness (via Helly's theorem), and convex entropy pairs. The entropy inequality is preserved in the passage to the limit, ensuring uniqueness even in highly nonlocal, nonlinear situations.
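A minimal numerical sketch of this vanishing-nonlocality limit, assuming an LWR-type velocity $v(r)=1-r$, an exponential downstream kernel, and a Lax-Friedrichs discretization (all illustrative choices, not taken from the cited paper):

```python
import numpy as np

# Nonlocal traffic model  rho_t + (rho * v(w_eps * rho))_x = 0,  v(r) = 1 - r,
# with an exponential downstream kernel of width eps. As eps shrinks, the
# computed profile approaches the entropy solution of the local LWR equation.

def solve_nonlocal(rho0, x, eps, t_end):
    dx = x[1] - x[0]
    dt = 0.2 * dx
    rho = rho0.copy()
    s = np.arange(0.0, 5.0 * eps, dx)              # kernel support [0, 5*eps]
    w = np.exp(-s / eps)
    w /= w.sum() * dx                              # normalize: sum(w) * dx = 1
    idx = np.minimum(np.arange(len(x))[:, None] + np.arange(len(w))[None, :],
                     len(x) - 1)                   # clipped downstream indices
    for _ in range(int(t_end / dt)):
        conv = (rho[idx] * w).sum(axis=1) * dx     # downstream average w_eps * rho
        flux = rho * (1.0 - conv)
        rho = 0.5 * (np.roll(rho, -1) + np.roll(rho, 1)) \
            - dt / (2 * dx) * (np.roll(flux, -1) - np.roll(flux, 1))
        rho[0], rho[-1] = rho[1], rho[-2]          # crude outflow boundaries
    return rho

x = np.linspace(0.0, 2.0, 400)
rho0 = np.where(x < 1.0, 0.8, 0.2)                 # upstream congestion
for eps in (0.2, 0.05, 0.0125):
    print(f"eps = {eps}:", np.round(solve_nonlocal(rho0, x, eps, 0.3)[::80], 3))
```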
For conservation laws with source terms (e.g., manufacturing systems with yield loss), entropy-admissible solutions are characterized by source-modified Kružkov-type entropy inequalities. Suitably designed splitting schemes and monotone finite-volume discretizations produce numerical solutions that converge to the unique entropy-admissible limit. Entropy admissibility is crucial for discrete total-variation boundedness, precluding spurious oscillations and ensuring physical relevance (Sarkar, 2013).
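A minimal operator-splitting sketch for a balance law with a yield-loss-type source $u_t + f(u)_x = -\lambda u$; the flux, loss rate, and grid below are illustrative stand-ins, not the cited model:

```python
import numpy as np

# Splitting sketch for a balance law u_t + f(u)_x = -lam * u (yield-loss source).
# Step 1: monotone Lax-Friedrichs transport; Step 2: exact source-ODE integration.

f = lambda u: 0.5 * u**2                   # illustrative flux
lam = 0.3                                  # hypothetical yield-loss rate

def split_step(u, dx, dt):
    up, um = np.roll(u, -1), np.roll(u, 1)
    u = 0.5 * (up + um) - dt / (2 * dx) * (f(up) - f(um))   # transport step
    return u * np.exp(-lam * dt)           # exact solve of u' = -lam * u

x = np.linspace(-1, 1, 200, endpoint=False)
dx = x[1] - x[0]
dt = 0.2 * dx
u = np.where(x < 0, 1.0, 0.0)
tv0 = np.abs(np.diff(u)).sum()
for _ in range(100):
    u = split_step(u, dx, dt)
print("TV before/after:", tv0, np.abs(np.diff(u)).sum())   # stays bounded
```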
3. Entropy Admissibility in Quantum Estimation and Statistical Inference
Entropy-admissibility appears naturally as a solution paradigm in inverse and estimation problems involving convex optimization of relative-entropy functionals. In quantum state tomography, the minimum relative entropy principle seeks the density operator $\rho$ that minimizes the quantum relative entropy to a prior state $\sigma$, subject to prescribed measurement constraints:

$$\min_{\rho \succeq 0,\ \operatorname{tr}\rho = 1} \; S(\rho\,\|\,\sigma) = \operatorname{tr}\bigl[\rho(\log\rho - \log\sigma)\bigr], \qquad \text{subject to } \operatorname{tr}(\rho E_i) = p_i,\ i=1,\dots,m.$$
The entropy-admissible solution is the unique minimizer (when the problem is feasible): it exactly matches the measurements and is closest to the prior in quantum relative entropy, the noncommutative analogue of Kullback–Leibler divergence. Feasibility is efficiently certified via convex eigenvalue minimization; if needed, the constraints are isotropically relaxed to restore admissibility. Dual variational analysis, global strict convexity, and continuous dependence on data ensure uniqueness, computability, and physicality of the solution (Zorzi et al., 2013).
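A commutative toy version of this variational principle, with synthetic placeholder data: minimize $\mathrm{KL}(p\|q)$ under linear constraints by solving the smooth convex dual, whose optimizer has exponential-family form. The quantum problem replaces KL by the Umegaki relative entropy over density matrices.

```python
import numpy as np
from scipy.optimize import minimize

# Classical analogue of minimum relative entropy:
#   minimize KL(p || q)  subject to  A p = b,  sum(p) = 1,  p >= 0.
# The minimizer is p_j ~ q_j * exp((A^T lam)_j); we solve the convex dual for lam.

rng = np.random.default_rng(0)
n, m = 8, 2
q = rng.random(n); q /= q.sum()          # prior distribution
A = rng.random((m, n))                   # measurement matrix (synthetic)
p_true = rng.random(n); p_true /= p_true.sum()
b = A @ p_true                           # feasible measurement outcomes

def dual(lam):
    # Negative of the concave dual function; minimized by BFGS below.
    z = q * np.exp(A.T @ lam)
    return np.log(z.sum()) - lam @ b

lam = minimize(dual, np.zeros(m), method="BFGS").x
p = q * np.exp(A.T @ lam); p /= p.sum()  # primal recovery from dual optimum
print("constraint residual:", np.abs(A @ p - b).max())
print("KL(p||q):", np.sum(p * np.log(p / q)))
```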
In robust utility maximization, entropy-admissibility defines the optimal class of investment strategies, namely those whose terminal wealth is a supermartingale under all local martingale measures with finite generalized entropy relative to the model set. This class ensures the existence of a strategy that attains the robust utility optimum under model uncertainty and captures adversarial risk (Owari, 2011).
4. Entropy-Admissibility in Ergodic Transport and Statistical Mechanics
In ergodic optimal transport and statistical mechanics, entropy-admissible solutions address the selection of equilibrium plans among the space of stochastic couplings. Given a cost function $c$ and a set of joint plans $\pi$ (with prescribed marginals and invariance), the entropy-admissible equilibrium plan maximizes the entropy-penalized functional

$$\pi \;\mapsto\; \int c \, d\pi + T\, H(\pi),$$

where $H(\pi)$ is the plan entropy and $T>0$ plays the role of a temperature. The dual problem is to minimize a sum of penalized potential functions over all admissible pairs $(\varphi, \psi)$ of potentials.
The entropy-admissible solution saturates the admissibility inequality on the support of the plan and recovers, in the zero-temperature limit, the classical optimal transport solution. The entropic regularization encodes both the maximum-likelihood stochastic coupling and the statistical mechanical equilibrium (Gibbs plan) (Lopes et al., 2013).
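The finite-dimensional analogue is entropic optimal transport, where the Gibbs plan can be computed by Sinkhorn scaling. The sketch below uses an illustrative cost and marginals and the cost-minimization convention (equivalent to the maximization above up to a sign flip); it shows the regularized cost approaching the unregularized optimal-transport value as $T \to 0$.

```python
import numpy as np

# Sinkhorn sketch of entropically regularized transport: the Gibbs plan
# pi = diag(u) K diag(v) with K = exp(-c/T) is the unique entropy-admissible
# coupling, and T -> 0 recovers classical optimal transport.

def sinkhorn(c, mu, nu, T, n_iter=500):
    K = np.exp(-c / T)                     # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)                 # match second marginal
        u = mu / (K @ v)                   # match first marginal
    return u[:, None] * K * v[None, :]     # coupling pi

n = 50
x = np.linspace(0, 1, n)
c = (x[:, None] - x[None, :]) ** 2         # quadratic cost (illustrative)
mu = np.ones(n) / n
nu = np.exp(-((x - 0.7) ** 2) / 0.02); nu /= nu.sum()

for T in (1.0, 0.1, 0.01):
    pi = sinkhorn(c, mu, nu, T)
    print(f"T={T}: transport cost = {np.sum(pi * c):.4f}")  # decreases toward OT value
```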
5. Entropy Admissibility in Nonlocal, Nonlinear, and Convex-Integration Regimes
Entropy-admissible solutions extend to nonlocal PDEs and highly underdetermined regimes. For the macroscopic incompressible porous media (IPM) equation, entropy admissibility is imposed via maximal potential-energy dissipation among all possible convex-integration subsolutions. The corresponding nonlocal conservation law admits solutions that satisfy all convex Kružkov-type entropy inequalities. Existence is established by analytic fixed-point methods for initial data with an analytic interface (Castro et al., 2023). Maximal-dissipation solutions align with both the relaxation via convex integration and Otto's gradient-flow (JKO) scheme.
However, for multidimensional hyperbolic systems such as the 2D Euler equations, the entropy condition alone does not suffice for uniqueness. Convex integration can construct infinitely many bounded entropy-admissible weak solutions corresponding to the same Riemann initial data, even when the classical 1D theory is unique. This demonstrates the limitations of entropy admissibility for well-posedness in higher dimensions and the necessity of further constraints (e.g., viscosity, BV regularity, or additional physical admissibility criteria) (Baba et al., 2018).
6. Entropy Admissibility in Optimization, Control, and Machine Learning
In combinatorial optimization and machine learning, entropy-admissibility appears in the form of constraints ensuring monotonicity, irreversibility, or no-overestimation. In heuristic search (A*), an admissible heuristic never overestimates the actual cost-to-go: $h(s) \le h^*(s)$ for all states $s$. Cross-Entropy Admissibility (CEA) is a loss function enforcing this property during neural-network training: it reallocates probability mass only among admissible estimates and penalizes overestimation. Under CEA, learned heuristics achieve near-zero overestimation rates and attain both sample-space generalization and compression advantages over classical pattern databases, with theoretical sample-complexity bounds scaling in network size rather than in state-space cardinality (Futuhi et al., 26 Sep 2025).
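As a rough illustration of the admissibility constraint (not the paper's exact CEA loss, which reallocates probability mass over discrete heuristic values), an asymmetric surrogate that penalizes overestimation far more heavily than underestimation might look like:

```python
import numpy as np

# Illustrative asymmetric surrogate loss: overestimates of the true cost-to-go
# h*(s) are penalized far more than underestimates, pushing h(s) <= h*(s).
# This is NOT the cited CEA loss; it only conveys the admissibility pressure.

def admissibility_loss(h_pred, h_star, over_weight=10.0):
    err = h_pred - h_star
    under = np.minimum(err, 0.0) ** 2                  # mild: h below h*
    over = over_weight * np.maximum(err, 0.0) ** 2     # harsh: h above h*
    return float(np.mean(under + over))

h_star = np.array([3.0, 5.0, 2.0, 8.0])
print(admissibility_loss(np.array([2.5, 4.0, 2.0, 7.5]), h_star))  # admissible: small
print(admissibility_loss(np.array([3.5, 6.0, 2.5, 9.0]), h_star))  # overestimating: large
```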
| Context | Entropy-Admissibility Mechanism | References |
|---|---|---|
| Scalar conservation laws | Kružkov entropy inequalities, interface dissipation | (Bressan et al., 2020, Mitrovic, 2010) |
| Quantum estimation | Minimum quantum relative entropy under constraints | (Zorzi et al., 2013) |
| Ergodic transport | Plan maximizing $\int c\,d\pi + T\,H(\pi)$; dual admissibility | (Lopes et al., 2013) |
| Robust utility maximization | Supermartingale property under finite-entropy measures | (Owari, 2011) |
| Nonlocal PDEs/convex integration | Maximum dissipation, Kružkov-type entropy | (Castro et al., 2023) |
| Machine learning/control | CEA loss enforcing $h(s) \le h^*(s)$ | (Futuhi et al., 26 Sep 2025) |
7. Limitations, Extensions, and Open Problems
Entropy-admissibility provides a versatile and rigorous selection mechanism across diverse mathematical and applied contexts. In scalar and local problems, it secures uniqueness and stability. In quantum and statistical estimation, it yields computable, unique, and interpretable solutions. For nonlocal and convex-integration regimes, maximal entropy dissipation aligns dissipation, irreversibility, and convex duality.
However, in multidimensional nonlinear systems (notably, compressible or incompressible Euler), entropy admissibility alone fails to guarantee uniqueness—demonstrated by explicit convex-integration examples. This underscores the necessity for enhanced criteria in higher-dimensional PDEs, potentially through vanishing-viscosity limits, additional structural constraints, or alternate mechanisms.
The connection between entropy-admissible solutions and computational schemes is robust—convergent numerical and machine learning methods exploiting discrete analogues of entropy inequalities achieve guaranteed admissibility and generalization. Open challenges remain in extending these frameworks to rougher data, higher dimensions, nonconvex settings, and adaptive or data-driven entropy selection in stochastic, control, and learning systems.