
Log-Polynomial Optimization Methods

Updated 14 January 2026
  • Log-polynomial optimization is the study of extremal problems where objectives are composed of logarithms of polynomials, crucial in maximum likelihood and cross-entropy models.
  • It offers a unifying framework for nonconvex, high-degree, and semialgebraic constraints, leveraging canonical duality and moment-SOS relaxations.
  • Recent advances include integrating KKT-type conditions and flat-extension criteria to boost computational efficiency and guarantee global optimality.

Log-polynomial optimization is the study and solution of extremal problems in which the objective function is composed of logarithms of polynomials, or more generally, sums of log-polynomial forms. These models arise in maximum likelihood, cross-entropy, Kullback-Leibler divergence, and smooth approximations of minimax-type loss functions. Log-polynomial optimization provides a unifying mathematical framework that encompasses nonconvex, high-degree, and semialgebraic constraints, and supports both analytical and algorithmic approaches grounded in canonical duality, convex optimization, and hierarchies of moment-sum-of-squares (SOS) relaxations (Choi et al., 6 Jan 2026, Chen et al., 2013, Alonso-Gutiérrez et al., 2020).

1. Mathematical Formulation and Classes of Problems

A canonical log-polynomial optimization problem has the structure
$$\min_{x \in K} \; f(x) = \sum_{i=1}^m a_i \log p_i(x)$$
with $a_i > 0$ and $p_i(x)$ real polynomials, over a semialgebraic set

$$K = \{ x \in \mathbb{R}^n : h_\ell(x) = 0 \;\ (\ell \in E), \; g_j(x) \geq 0 \; (j \in I) \}.$$

The compactness of $K$ and the positivity of each $p_i$ on $K$ are essential for a well-defined objective. Key special cases include quartic-plus-log-sum-exp minimization,

$$\min_{x \in \mathbb{R}^n} P_4(x) + \frac{1}{\beta}\log\left( 1 + \sum_{i=1}^p e^{\beta q_i(x)} \right)$$

with $P_4(x)$ a quartic polynomial and $q_i(x)$ quadratic forms (Chen et al., 2013), as well as statistical estimators and smooth convex relaxations of hard combinatorial objectives (Choi et al., 6 Jan 2026).
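The log-sum-exp term is a smooth surrogate for $\max(0, \max_i q_i(x))$ as $\beta \to \infty$. A minimal numerical sketch, with a hypothetical quartic $P_4$ and quadratics $q_i$ chosen for illustration (not instances from the cited papers):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical instance: P4 is a quartic, q1, q2 are quadratic forms.
def P4(x):
    return (x[0]**2 + x[1]**2 - 1.0)**2                 # quartic polynomial

def q(x):
    return np.array([x[0]**2 - x[1], x[1]**2 - x[0]])   # quadratic forms

def objective(x, beta=50.0):
    # Numerically stable log-sum-exp of {0, beta*q_1(x), ..., beta*q_p(x)}
    z = np.concatenate(([0.0], beta * q(x)))
    m = z.max()
    return P4(x) + (m + np.log(np.exp(z - m).sum())) / beta

res = minimize(objective, x0=np.array([0.5, 0.5]), method="BFGS")
print(res.x, res.fun)
```

At $x = (2, 0)$ the smooth term evaluates to $\approx \max(0, q_1, q_2) = 4$, so the objective is $\approx 9 + 4 = 13$, confirming the surrogate behavior for moderate $\beta$.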

2. Canonical Duality and Triality Theory

Nonconvex log-polynomial problems of quartic and log-sum-exp type admit reformulation via canonical duality-triality principles (Chen et al., 2013). The canonical transformation decomposes the primal function into convex generating functions and applies the Legendre-Fenchel conjugate, yielding a dual problem
$$(\mathcal{P}^d)\quad \max_{(\tau,\sigma)\in S^d}\;\Pi^d(\tau,\sigma),$$
where $S^d$ encodes positivity and interiority constraints and $\Pi^d(\tau,\sigma)$ aggregates the conjugate terms. The triality theorem partitions dual feasible points into regions corresponding to global, local, and saddle solutions of the primal problem. Analytical minimizers and extremal values for both the global and the largest local extrema are given by

$$x^* = G(\bar\tau, \bar\sigma)^{-1} f$$

with $G$ a generalized Hessian built from the dual variables. The existence of a maximizer in the positive-definite region $S^+$ classifies instances as "easy"; violation of the existence condition leads to "hard" cases requiring local or perturbative approaches. Explicit criteria for this phase transition are derived for quartic and minimax subproblems (Chen et al., 2013).
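The mechanics can be illustrated on the classical one-dimensional double-well instance $P(x) = \tfrac{\alpha}{2}(\tfrac{1}{2}x^2 - \lambda)^2 - fx$, a standard canonical-duality textbook example (parameters below are hypothetical, not taken from the cited papers). Its canonical dual is $\Pi^d(\sigma) = -f^2/(2\sigma) - \lambda\sigma - \sigma^2/(2\alpha)$, and the largest positive stationary point $\bar\sigma \in S^+$ recovers the global minimizer via $x^* = f/\bar\sigma$, the scalar analogue of $x^* = G(\bar\tau,\bar\sigma)^{-1}f$:

```python
import numpy as np

alpha, lam, f = 1.0, 2.0, 0.5   # hypothetical double-well instance

def P(x):
    # Primal nonconvex quartic: P(x) = (alpha/2)(x^2/2 - lam)^2 - f*x
    return 0.5 * alpha * (0.5 * x**2 - lam)**2 - f * x

def P_dual(s):
    # Canonical dual: Pi^d(sigma) = -f^2/(2 sigma) - lam*sigma - sigma^2/(2 alpha)
    return -f**2 / (2 * s) - lam * s - s**2 / (2 * alpha)

# Dual stationarity reduces to sigma^3 + alpha*lam*sigma^2 - alpha*f^2/2 = 0;
# the largest positive root lies in S+ and certifies the global minimizer.
roots = np.roots([1.0, alpha * lam, 0.0, -alpha * f**2 / 2])
sigma_bar = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
x_star = f / sigma_bar          # 1-D analogue of x* = G(sigma_bar)^{-1} f

# Brute-force verification against a fine grid search on the primal
xs = np.linspace(-4, 4, 400001)
x_grid = xs[np.argmin(P(xs))]
print(x_star, x_grid, P(x_star), P_dual(sigma_bar))
```

Zero duality gap holds at the stationary pair: $P(x^*) = \Pi^d(\bar\sigma)$, and the grid minimizer agrees with the analytic one.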

3. Moment-SOS Relaxation Hierarchy

General log-polynomial objectives over semialgebraic sets are tractable via a hierarchy of convex moment-SOS relaxations (Choi et al., 6 Jan 2026). For relaxation order $d$, the original problem is mapped to a moment-based SDP:
$$f_{\mathrm{mom},d} = \min \sum_{i=1}^m a_i \log \langle p_i, y \rangle \quad \text{s.t.} \quad y_0 = 1,\; M_d[y] \succeq 0,\; L_{g_j}^{(d)}[y] \succeq 0,\; L_{h_\ell}^{(d)}[y] = 0,$$
where $M_d[y]$ is the moment matrix and $L_g^{(d)}[y]$ the localizing matrix encoding the constraint $g$. Under the Archimedean condition, Putinar's Positivstellensatz guarantees a convergent sequence $f_{\mathrm{mom},d} \to f^*$, the global minimum.
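For intuition, the moment and localizing matrices can be assembled directly in the univariate case. A sketch using the moments of the uniform measure on $[0,1]$ and the constraint $g(x) = x(1-x) \ge 0$; solving the actual SDP would additionally require a solver such as MOSEK, as in the cited experiments:

```python
import numpy as np

# Moments y_k = ∫ x^k dμ of μ = uniform measure on [0,1]: y_k = 1/(k+1)
d = 2
y = np.array([1.0 / (k + 1) for k in range(2 * d + 1)])

# Univariate moment matrix M_d[y]: (M_d)_{ij} = y_{i+j}, i,j = 0..d
M = np.array([[y[i + j] for j in range(d + 1)] for i in range(d + 1)])

# Localizing matrix for g(x) = x(1-x) >= 0, i.e. K = [0,1]:
# (L_g)_{ij} = y_{i+j+1} - y_{i+j+2}, i,j = 0..d-1
L = np.array([[y[i + j + 1] - y[i + j + 2] for j in range(d)] for i in range(d)])

# A moment sequence of a measure supported on K is feasible:
# both matrices are positive semidefinite.
print(np.linalg.eigvalsh(M).min(), np.linalg.eigvalsh(L).min())
```

Since $y$ comes from a genuine measure supported on $K$, both matrices are PSD; the SDP relaxation searches over all such pseudo-moment sequences.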

Finite convergence is detected by the flat-extension rank condition
$$\operatorname{rank} M_t[y^*] = \operatorname{rank} M_{t-\delta}[y^*],$$
which enables extraction of the atomic representing measure and the global optimizers. Extraction constructs multiplication matrices and uses joint eigen-decompositions to recover optimizer coordinates and weights.
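A univariate sketch of the extraction step, assuming a flat moment sequence generated by a hypothetical two-atom optimal measure: the eigenvalues of the multiplication matrix recover the atoms, and a Vandermonde solve recovers the weights:

```python
import numpy as np

# Moments of the atomic measure 0.4*δ_1 + 0.6*δ_3 (a hypothetical optimal y*)
atoms, weights = np.array([1.0, 3.0]), np.array([0.4, 0.6])
y = np.array([(weights * atoms**k).sum() for k in range(4)])   # y_0..y_3

# Flat data: rank M_1[y] = number of atoms = 2. The univariate
# multiplication matrix N represents "multiply by x" on the moment basis.
M = np.array([[y[0], y[1]], [y[1], y[2]]])      # moment matrix M_1[y]
S = np.array([[y[1], y[2]], [y[2], y[3]]])      # shifted (x-localizing) matrix
N = np.linalg.solve(M, S)

recovered_atoms = np.sort(np.linalg.eigvals(N).real)  # eigenvalues = support points

# Weights from the Vandermonde system: sum_j w_j x_j^k = y_k, k = 0, 1
V = np.vander(recovered_atoms, 2, increasing=True).T
recovered_weights = np.linalg.solve(V, y[:2])
print(recovered_atoms, recovered_weights)
```

In the multivariate setting the same idea uses joint eigen-decompositions of several commuting multiplication matrices, one per coordinate.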

4. Optimality Conditions and KKT-Type Enhancements

Log-polynomial optimization relaxations can be systematically strengthened by including KKT-type optimality conditions via polynomial Lagrange multiplier expressions (LMEs) (Choi et al., 6 Jan 2026). For constraint sets described by $g_j$ and $h_\ell$, explicit rational/polynomial functions $\tau_j(x)$ substitute for the multipliers, yielding polynomial equations and inequalities appended to the moment-SOS relaxation:
$$f_{\mathrm{lme},d_1} = \min \sum_{i=1}^m a_i \log \langle p_i, y \rangle \quad \text{s.t. moment-SDP and KKT-localizing constraints.}$$
In practice, such LME-augmented relaxations often converge at lower orders and improve numerical performance. The moment-SOS and KKT/contact-point approaches have also been employed in best approximation problems for log-concave functions, with existence and uniqueness certified by contact-point moment characterizations (Alonso-Gutiérrez et al., 2020).
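A toy illustration (hypothetical, not one of the paper's instances) of a closed-form multiplier expression: minimizing $f(x) = \log p(x)$ with $p(x) = 2 + x_1$ over the unit circle $h(x) = \|x\|^2 - 1 = 0$. Contracting the stationarity condition $\nabla f = \lambda \nabla h$ with $x$ gives $\lambda(x) = \tfrac{1}{2}\, x^\top \nabla f(x)$ on the circle, since $x^\top \nabla h(x) = 2$ there:

```python
import numpy as np

# Toy instance: minimize log p(x), p(x) = 2 + x1, on the unit circle.
def grad_f(x):
    p = 2.0 + x[0]
    return np.array([1.0 / p, 0.0])          # ∇ log p = ∇p / p

def grad_h(x):
    return 2.0 * x                           # h(x) = x·x - 1

# Multiplier expression valid on h = 0: lam(x) = x·∇f(x) / 2
def lam(x):
    return 0.5 * (x @ grad_f(x))

x_min = np.array([-1.0, 0.0])                # minimizer: p is smallest at x1 = -1
residual = grad_f(x_min) - lam(x_min) * grad_h(x_min)
print(lam(x_min), residual)
```

The stationarity residual vanishes at the minimizer, which is exactly the condition the LME-augmented relaxation imposes as extra polynomial constraints.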

5. Analytical and Geometric Characterization: Contact Points

Paralleling Löwner ellipsoids and Lasserre's generalizations (Alonso-Gutiérrez et al., 2020), best approximation of compact sets or log-concave functions by "log-polynomials" entails minimizing the measure of a polynomial sublevel set containing the data or the function support:
$$g_0 = \operatorname{argmin} \{ |G_1(g)| : K \subset G_1(g),\; g \in H_{n,d} \}.$$
Uniqueness of optimizers is linked to the existence of a finite set of "contact points" at which the equality constraints $f(x_i) = t_2 e^{-g_2(x_i)}$ hold, together with moment-matching conditions. The injectivity and positive-definite Jacobian of the moment map enable robust computational solution via convex programming or interior-point/SQP solvers.

6. Illustrative Applications and Numerical Performance

Log-polynomial optimization is prominent in maximum likelihood estimation (with polynomial-parametrized probability mass functions), cross-entropy and KL-divergence minimization in classification, and product-of-polynomial maximization. For moderate $n$ and $m$, moment-SOS (and KKT-enriched) relaxations yield solution times on the order of one second and accurate extraction of global minimizers, as demonstrated via MATLAB/YALMIP+MOSEK (Choi et al., 6 Jan 2026). A summary of typical timing and rank results is given below.

| Problem Type | $n$ | $m$ | $d$ | Order | Time (s) | Rank |
|---|---|---|---|---|---|---|
| Discrete MLE | 4 | ~10 | 2 | 2 | 0.8 | 1 |
| Cross-entropy | 5 | 6 | 2 | 2 | 1.2 | 2 |
| High-degree product | 5 | 2 | 2 | 2 | 1.0 | 1 |
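As a minimal example of the MLE use case (illustrative, not one of the benchmarked instances), consider a Bernoulli pmf polynomial in $\theta$: the log-likelihood is a weighted sum of logs of polynomials, and the optimizer has the closed form $\hat\theta = a_1/(a_1 + a_0)$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative discrete MLE: pmf over {0,1} with P(X=1) = theta,
# P(X=0) = 1 - theta (both polynomials in theta); counts a1, a0.
a1, a0 = 7, 3

def neg_log_lik(theta):
    # Negative log-likelihood: -(a1*log(theta) + a0*log(1-theta)),
    # a log-polynomial objective in the sense of the survey.
    return -(a1 * np.log(theta) + a0 * np.log(1.0 - theta))

res = minimize_scalar(neg_log_lik, bounds=(1e-9, 1 - 1e-9), method="bounded")
theta_hat = res.x
print(theta_hat)   # closed form: a1 / (a1 + a0) = 0.7
```

Here the problem is univariate and concave in $\log$, so a scalar solver suffices; the moment-SOS machinery becomes relevant when the pmf is parametrized by multivariate polynomials under semialgebraic constraints.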

A plausible implication is that exploitation of structural sparsity and extension to non-polynomial activations are promising directions for research.

7. Computational and Theoretical Perspectives

Globally convex relaxations and contact-point characterizations yield efficient algorithms for problem classes with strict convexity. Nonconvex cases addressed by canonical duality and triality theory allow closed-form and analytical parameterization of solutions, subject to existence conditions. Computational implementations require moment-SOS hierarchies, flat-extension criteria, and polynomial multiplier constructions. The combination of convex programming, spectral extraction, and dual-triality analysis provides a comprehensive arsenal for global optimization of log-polynomial models across statistics, engineering, and machine learning contexts.

Future directions include exploiting sparsity patterns in moment-SOS relaxations, algorithmic treatment of dynamic constraint activation, and extension to non-polynomial log-activations, as highlighted in (Choi et al., 6 Jan 2026).
