MINLP: Mixed-Integer Nonlinear Programming
- MINLP is a mathematical optimization framework combining integer and continuous variables with nonlinear functions to tackle complex engineering challenges.
- The outer approximation method transforms convex MINLPs into MILP master problems using KKT-compliant subgradients for reliable convergence.
- The framework is widely applied in energy, logistics, and design, leveraging finite convergence guarantees even with nondifferentiable data.
A mixed-integer nonlinear programming (MINLP) model is a mathematical optimization framework that incorporates both continuous and discrete (integer or binary) decision variables, combined with nonlinear (possibly non-differentiable) objective and constraint functions. The central challenge in MINLP is to simultaneously manage the combinatorial complexity induced by integer variables and the analytical intricacies of nonlinear functions, especially when differentiability cannot be assumed. As a result, MINLP plays a pivotal role in modeling and solving high-fidelity optimization problems in engineering, logistics, energy systems, and design domains.
1. Problem Structure and Types of MINLP
The standard convex MINLP problem, as addressed in the literature, is written as
$$\min_{x,\, y} \; f(x, y) \quad \text{s.t.} \quad g_j(x, y) \le 0, \; j = 1, \dots, m, \qquad x \in X, \; y \in Y,$$
where $x \in \mathbb{R}^n$ are continuous variables constrained to a nonempty compact convex set $X$, $y \in Y$ are discrete (often integer or binary) variables, $f$ is a convex (but potentially nondifferentiable) objective function, and the $g_j$ are convex (possibly nondifferentiable) constraint functions (Wei et al., 2015).
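For concreteness, a small instance fitting this template (an illustrative example, not one from the paper) is
$$\min_{x \in [0,3], \; y \in \{0,1\}} \; |x - 1| + 2y \quad \text{s.t.} \quad 2 - x - y \le 0,$$
whose objective is convex but nondifferentiable at $x = 1$; the optimum is $x = 2$, $y = 0$ with value $1$.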
A major distinction is drawn between convex MINLPs (where all nonlinearities are convex and constraints are convex in the continuous variables for fixed integer values) and nonconvex MINLPs (with potentially nonconvex nonlinearities or constraints). Convex MINLPs allow for rigorous reformulations and convergence guarantees, while nonconvex cases are generally NP-hard and require global optimization algorithms, often without finite convergence or optimality certificates.
2. Outer Approximation and MILP Reformulation
The outer approximation method is a foundational strategy for convex MINLP. The method iteratively constructs a mixed-integer linear programming (MILP) master problem by linearizing the nonlinear functions at selected points in the feasible region. The key technical advancement is the use of subgradients in place of gradients when the nonlinear data are not differentiable.
For a feasible integer assignment $y^k \in Y$, the nonlinear subproblem
$$\mathrm{NLP}(y^k): \quad \min_{x \in X} \; f(x, y^k) \quad \text{s.t.} \quad g_j(x, y^k) \le 0, \; j = 1, \dots, m,$$
is solved to optimality. If $x^k$ is the solution and the Slater constraint qualification holds, the Karush–Kuhn–Tucker (KKT) conditions provide subgradients $\xi^k \in \partial f(x^k, y^k)$ and $\zeta_j^k \in \partial g_j(x^k, y^k)$ for each active constraint $g_j$, which are then used to build outer linearizations:
$$\eta \ge f(x^k, y^k) + (\xi^k)^\top \begin{pmatrix} x - x^k \\ y - y^k \end{pmatrix}, \qquad 0 \ge g_j(x^k, y^k) + (\zeta_j^k)^\top \begin{pmatrix} x - x^k \\ y - y^k \end{pmatrix}.$$
Arbitrary subgradients do not suffice; only those satisfying the KKT conditions of the subproblem at $x^k$ yield valid linearizations that preserve the original problem's optimality properties (see Example 3.1 in (Wei et al., 2015)).
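For reference, the nonsmooth KKT stationarity condition behind this subgradient selection can be stated in a generic convex-analysis form (a standard fact, not necessarily the paper's exact notation): there exist multipliers $\lambda_j \ge 0$ such that
$$0 \in \partial_x f(x^k, y^k) + \sum_j \lambda_j \, \partial_x g_j(x^k, y^k) + N_X(x^k), \qquad \lambda_j \, g_j(x^k, y^k) = 0 \;\; \text{for all } j,$$
where $N_X(x^k)$ denotes the normal cone to $X$ at $x^k$; the KKT-compliant subgradients are those whose $x$-components realize this inclusion.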
The master MILP accumulates these cuts over a finite number of subproblem solutions:
$$\min_{\eta,\, x \in X,\, y \in Y} \; \eta \quad \text{s.t.} \quad \eta \ge f(x^k, y^k) + (\xi^k)^\top \begin{pmatrix} x - x^k \\ y - y^k \end{pmatrix}, \quad 0 \ge g_j(x^k, y^k) + (\zeta_j^k)^\top \begin{pmatrix} x - x^k \\ y - y^k \end{pmatrix} \quad \text{for all } k, j.$$
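As a small numerical illustration of assembling such a cut from the subgradient inequality (a generic sketch; `oa_cut` and its interface are invented for this example and belong to no solver API):

```python
import numpy as np

def oa_cut(h_val, subgrad, z_k):
    """Linearize a convex function h at the point z_k via the subgradient inequality
        h(z) >= h(z_k) + subgrad @ (z - z_k)   for all z,
    returning (a, b) so the cut reads  a @ z + b <= eta  (objective cut)
    or  a @ z + b <= 0  (constraint cut)."""
    a = np.asarray(subgrad, dtype=float)
    b = float(h_val) - a @ np.asarray(z_k, dtype=float)
    return a, b

# Example: linearizing h(z) = |z| at z_k = 2 with its (unique) subgradient 1:
a, b = oa_cut(h_val=2.0, subgrad=[1.0], z_k=[2.0])
print(a, b)  # [1.] 0.0  -> cut: z <= eta, i.e. eta >= z, valid since |z| >= z
```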
3. Iterative Algorithmic Framework
The algorithm proceeds in a finite loop as follows:
- For a candidate assignment $y^k$, solve the subproblem $\mathrm{NLP}(y^k)$. If it is feasible, derive an optimal solution $x^k$ and extract KKT-compliant subgradients to form linear cuts.
- If $\mathrm{NLP}(y^k)$ is infeasible, solve an auxiliary feasibility subproblem to construct cuts that exclude $y^k$.
- Add the new cuts to the master MILP (representing both optimality and infeasibility information).
- Solve the updated master MILP to obtain a new candidate assignment $y^{k+1}$.
- Terminate when the master MILP is infeasible (certifying that the original MINLP is infeasible) or when its optimal cost matches the best subproblem value (certifying optimality); see the sketch after this list.
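Below is a minimal, runnable sketch of this loop on a toy nonsmooth convex instance, assuming SciPy is available; the instance, the helper names, and the enumeration-based master solve are illustrative choices rather than the paper's implementation. Every subproblem in this toy is feasible, so the feasibility-cut branch is omitted:

```python
# Toy instance:  min |x| + y   s.t.  1.5 - x - y <= 0,  x in [-5, 5],  y in {0, 1, 2}.
import numpy as np
from scipy.optimize import linprog

Y = [0, 1, 2]

def solve_subproblem(y):
    """NLP(y): min_x |x| + y  s.t.  x >= 1.5 - y,  x in [-5, 5]  (solved analytically)."""
    x = min(max(1.5 - y, 0.0), 5.0)   # project the unconstrained minimizer 0 onto [1.5 - y, 5]
    s = 1.0 if x > 0 else 0.0         # KKT-compliant subgradient of |x| at x
    return x, abs(x) + y, s

def solve_master(cuts):
    """Master MILP: min eta subject to all cuts; solved here by enumerating the
    finite set Y and solving an LP in (x, eta) for each fixed y."""
    best = None
    for y in Y:
        A_ub = [[-1.0, 0.0]]          # linear constraint 1.5 - x - y <= 0  ->  -x <= y - 1.5
        b_ub = [y - 1.5]
        for (xk, yk, fk, s) in cuts:  # cut: eta >= fk + s*(x - xk) + (y - yk)
            A_ub.append([s, -1.0])
            b_ub.append(s * xk - fk + yk - y)
        res = linprog([0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                      bounds=[(-5.0, 5.0), (None, None)])
        if res.success and (best is None or res.fun < best[0]):
            best = (res.fun, y)
    return best                       # (eta, y), or None if every LP is infeasible

cuts, upper, y_k = [], np.inf, 0
for _ in range(20):
    x_k, f_k, s = solve_subproblem(y_k)
    upper = min(upper, f_k)           # incumbent upper bound
    cuts.append((x_k, y_k, f_k, s))   # optimality cut from NLP(y_k)
    master = solve_master(cuts)
    if master is None:                # master infeasible -> original MINLP infeasible
        print("infeasible")
        break
    eta, y_k = master
    if eta >= upper - 1e-9:           # lower bound meets incumbent -> optimal
        print("optimal value:", upper)
        break
```

On this instance the loop certifies the optimal value 1.5 after a single subproblem solve, since the first cut already makes the master bound meet the incumbent.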
The convergence of the algorithm is guaranteed under the convexity and Slater assumptions, together with the finiteness of $Y$ [(Wei et al., 2015), Theorem 3.6].
4. Role of Subgradients and Theoretical Generalization
When the data are nonsmooth but convex, subgradient-based linearizations are required to preserve equivalence with the original MINLP in the outer approximation. Gradients suffice in the differentiable case, but nondifferentiable settings demand subgradients arising specifically from the KKT conditions at subproblem solutions to maintain the theoretical guarantees. The work strictly generalizes the classical outer approximation frameworks to convex MINLPs with nondifferentiable data, substantially broadening the class of tractable real-world problems (Wei et al., 2015).
5. Convergence Guarantees and Theoretical Properties
The key convergence result states that, provided the convexity and regularity conditions (notably, the Slater condition for all feasible subproblems) are met and the discrete variable set is finite, the algorithm cannot loop endlessly. Either an optimal solution is found (with the outer-approximation master MILP matching a feasible subproblem), or the relaxation is tightened to the point of infeasibility—implying the original MINLP has no solution. KKT-compliant subgradient selection is central to this guarantee.
The overall procedure generalizes the differentiable-case literature (e.g., [5, 8, 23] in the referenced work), and explicit counterexamples are provided to demonstrate failure of the method with arbitrary subgradients (Wei et al., 2015).
6. Practical Implications and Applications
Implementing the framework enables practitioners to solve convex, nondifferentiable MINLP problems of industrial and engineering relevance by reformulating them into MILPs compatible with state-of-the-art solvers. The approach removes the reliance on continuous differentiability, thus supporting a broader array of models involving, for instance, piecewise linear costs, submodular penalties, or other nonsmooth convex formulations. The finite convergence, rigorous outer-approximation, and MILP reformulation underpin applications in process systems engineering, energy optimization, communication networks, and complex design, where nonsmooth convexity is prevalent (Wei et al., 2015).
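As one concrete nonsmooth ingredient, the piecewise-linear costs mentioned above expose subgradients directly: for $h(x) = \max_i (a_i^\top x + b_i)$, the slope of any maximizing piece is a valid subgradient. A short sketch (the function name is invented for this illustration):

```python
import numpy as np

def pwl_value_and_subgradient(A, b, x):
    """Return h(x) and one subgradient of h(x) = max_i (A[i] @ x + b[i]);
    the slope of any maximizing (active) piece lies in the subdifferential."""
    vals = A @ x + b
    i = int(np.argmax(vals))      # index of an active piece
    return vals[i], A[i]

A = np.array([[1.0], [-1.0]])     # h(x) = max(x, -x) = |x|
b = np.zeros(2)
print(pwl_value_and_subgradient(A, b, np.array([2.0])))  # (2.0, array([1.]))
```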
7. Limitations and Extensions
The methodology is restricted to convex MINLPs; nonconvex or indefinite constraints lie outside the convergence guarantees and may require global optimization algorithms based on branch-and-bound or domain partitioning. The algorithm's success is contingent upon proper enforcement of the Slater condition and the ability to efficiently solve both nonlinear convex subproblems and large master MILPs. The extension to infinite or high-cardinality discrete sets is nontrivial and remains a subject for further research. Nonetheless, the proposed algorithm rigorously generalizes outer-approximation techniques by enabling their use in key practical models afflicted by nondifferentiability and mixed-integer structure.