Fractional Matrix Programs (FMP)
- Fractional Matrix Programs are optimization problems in which objectives or constraints are ratios of matrix- or scalar-valued functions, reflecting key metrics such as SINR and MSE.
- Advanced methods like Dinkelbach's method, quadratic transform, and MM algorithms enable efficient solutions even for complex nonconvex FMPs.
- FMPs have broad applications in communications, radar, machine learning, and resource allocation, providing rigorous convergence guarantees and performance improvements.
Fractional Matrix Programs (FMP) refer to a broad class of mathematical optimization problems in which the objective or constraint functions are constructed from ratios of matrix-valued or scalar-valued functionals, often involving Hermitian or positive semidefinite matrices. FMPs play a foundational role in control, communications, signal processing, and machine learning owing to their ability to encode essential metrics such as signal-to-interference-plus-noise ratio (SINR), energy efficiency, minimum mean-square error (MSE), and the Cramér-Rao bound (CRB) (Shen et al., 13 Mar 2025, Soleymani et al., 3 Feb 2025, Krishtal et al., 2023). Recent advances unify and generalize classical methods (Dinkelbach, minorization–maximization, quadratic transform) for efficient algorithmic solution of FMPs involving sums or products of multiple fractional functions, often admitting matrix arguments and constraints.
1. Formal Definitions and Problem Classes
An FMP typically considers an optimization variable $X$ (or a set of matrices $\{X_k\}$), aiming to extremize functions composed from multiple fractional functions (FFs):
- Scalar FF: $f(X) = A(X)/B(X)$, with $A(X) \geq 0$, $B(X) > 0$.
- Matrix-ratio FF: $\mathbf{A}(X)$ and $\mathbf{B}(X) \succ 0$ are Hermitian, and objectives are traces of $\mathbf{B}(X)^{-1}\mathbf{A}(X)$.
The general forms include:
- Minimization: $\min_{X} \sum_{m} f_m(X)$, subject to $X \in \mathcal{X}$.
- Maximization: $\max_{X} \sum_{m} f_m(X)$, subject to $X \in \mathcal{X}$. These forms cover single or multiple ratios, sums or products of FFs, and support both scalar- and matrix-valued numerators and denominators (Soleymani et al., 3 Feb 2025).
A canonical matrix-form FMP is
$$\max_{X \in \mathcal{X}} \ \sum_{m=1}^{M} \operatorname{Tr}\big(\mathbf{A}_m(X)^{\mathsf H}\, \mathbf{B}_m(X)^{-1}\, \mathbf{A}_m(X)\big),$$
where $\mathbf{A}_m(X) \in \mathbb{C}^{N \times d}$, $\mathbf{B}_m(X) \in \mathbb{H}^{N \times N}$ is positive definite, and $\mathcal{X}$ is the feasible set (Shen et al., 13 Mar 2025).
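As a small numerical illustration (with hypothetical matrices, not drawn from the cited works), the value of a matrix-ratio FF $\operatorname{Tr}(\mathbf{B}^{-1}\mathbf{A})$ can be evaluated directly:

```python
import numpy as np

# Hypothetical Hermitian PSD numerator A and Hermitian PD denominator B
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # Hermitian, positive semidefinite
B = np.array([[4.0, 0.0],
              [0.0, 2.0]])   # Hermitian, positive definite

# Matrix-ratio fractional function value: Tr(B^{-1} A).
# Solve B Z = A rather than forming B^{-1} explicitly (better conditioning).
value = np.trace(np.linalg.solve(B, A))
print(value)  # Tr(diag(1/4, 1/2) @ A) = 0.5 + 1.0 = 1.5
```

Avoiding the explicit inverse in favor of a linear solve is the standard numerical practice whenever such trace-of-ratio terms appear inside an iterative FMP algorithm.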
2. Core Algorithmic Paradigms
Several algorithmic traditions underpin FMP solution methods:
- Dinkelbach's Method: Classical approach for single-ratio, scalar-valued, concave/convex fractional programs. The method employs a root-finding scheme for $F(\lambda) = \max_{x \in \mathcal{X}} \{A(x) - \lambda B(x)\}$ and iteratively solves these auxiliary programs until $F(\lambda) = 0$, at which point global optimality is established (Soleymani et al., 3 Feb 2025, Krishtal et al., 2023).
- Generalized Dinkelbach (GDA): Extends to multiple ratios (e.g., sum-of-ratios objectives $\sum_m A_m(x)/B_m(x)$), but typically admits only stationarity guarantees and involves a twin-loop algorithm, limiting practical scalability (Soleymani et al., 3 Feb 2025).
- Quadratic Transform (QT) and Shen–Yu Algorithm: Converts each fractional term into an equivalent biconvex or block-convex surrogate. For a sum of ratios $\sum_m |a_m(x)|^2 / b_m(x)$, this produces an augmented objective $$\sum_{m=1}^{M} \big(2\,\mathrm{Re}\{y_m^{*} a_m(x)\} - |y_m|^2\, b_m(x)\big).$$ Alternating optimization over $x$ and the auxiliary variables (optimally $y_m = a_m(x)/b_m(x)$) yields monotonic ascent in objective value and converges to a stationary point (Shen et al., 13 Mar 2025, Krishtal et al., 2023).
- Single-loop MM Algorithms: The recent framework (Soleymani et al., 3 Feb 2025) extends MM-type single-loop update strategies to arbitrary sums, products, and matrix-valued FFs. These methods generate surrogate majorants/minorants ensuring monotonic convergence to stationary points, requiring only mild regularity (continuity).
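To make the single-ratio case concrete, here is a minimal Dinkelbach sketch on a toy problem of my own construction (not from the cited papers): maximize $c^{\mathsf T}x / (x^{\mathsf T}x + 1)$, whose $\lambda$-parameterized auxiliary program $\max_x\, c^{\mathsf T}x - \lambda(x^{\mathsf T}x + 1)$ has the closed-form solution $x = c/(2\lambda)$.

```python
import numpy as np

def dinkelbach(c, lam0=1.0, tol=1e-10, max_iter=100):
    """Maximize f(x) = c^T x / (x^T x + 1) by Dinkelbach's method.

    Each iteration solves the auxiliary program
        max_x  c^T x - lam * (x^T x + 1),
    which here has the closed form x = c / (2 * lam),
    then updates lam to the achieved ratio f(x).
    """
    lam = lam0
    for _ in range(max_iter):
        x = c / (2.0 * lam)                # auxiliary-program maximizer
        new_lam = c @ x / (x @ x + 1.0)    # achieved ratio f(x)
        if abs(new_lam - lam) < tol:       # F(lam) ~ 0 => global optimum
            break
        lam = new_lam
    return x, lam

# For c = (3, 4), the optimum is x* = c / ||c|| with value ||c|| / 2 = 2.5.
x_opt, f_opt = dinkelbach(np.array([3.0, 4.0]))
print(f_opt)  # -> approximately 2.5
```

Because $A$ is linear and $B$ is strictly convex here, the $\lambda$-update converges rapidly (quadratically near the fixed point for this instance); it is precisely this reliance on a tractable auxiliary program that breaks down for sums of ratios, motivating GDA and QT.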
3. Theoretical Properties and Complexity
The theoretical guarantees are closely tied to the class of the FMP and chosen algorithmic approach:
- Stationary-point convergence: Alternating MM/QT approaches, under mild differentiability and convex-feasibility assumptions, converge to stationary points of the original FMP. For convex–convex quadratic scalar cases with single denominator, the Shen–Yu transform followed by global Dinkelbach checking ensures global maximization (Shen et al., 13 Mar 2025, Krishtal et al., 2023).
- Global-optimality certification: By combining local improvement loops (Shen–Yu/QT) with outer global checks (Dinkelbach root-finding), global maximizers can be reliably identified, especially for low-rank quadratic forms (Krishtal et al., 2023).
- Computational Complexity:
- QT/MM per-iteration cost: Dominated by matrix inversions and a convex (or closed-form) primal update (Shen et al., 13 Mar 2025).
- Region-checking for low-rank quadratic fractional programs: the number of sign regions to enumerate grows only with the rank $r$ of the quadratic form, so the overall complexity is polynomial in the ambient dimension when $r$ is small (Krishtal et al., 2023).
- The recent single-loop MM matches the per-iteration complexity of GDA, with simpler coordination and, empirically, faster convergence (Soleymani et al., 3 Feb 2025).
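The QT/MM iteration pattern above can be sketched on a toy sum-of-ratios problem (illustrative data of my own choosing, unconstrained for simplicity): maximize $\sum_m (a_m^{\mathsf T}x)^2 / (x^{\mathsf T}B_m x + 1)$. The auxiliary update is $y_m = a_m^{\mathsf T}x / (x^{\mathsf T}B_m x + 1)$, and the $x$-update maximizes the resulting concave quadratic surrogate in closed form.

```python
import numpy as np

def objective(x, A, Bs):
    # Sum-of-ratios objective: sum_m (a_m^T x)^2 / (x^T B_m x + 1)
    return sum((a @ x) ** 2 / (x @ B @ x + 1.0) for a, B in zip(A, Bs))

def quadratic_transform(A, Bs, x0, iters=50):
    """Maximize a sum of ratios via the quadratic-transform surrogate.

    Alternates the closed-form auxiliary update
        y_m = a_m^T x / (x^T B_m x + 1)
    with the surrogate's closed-form x-update
        x = (sum_m y_m^2 B_m)^{-1} (sum_m y_m a_m).
    The objective is nondecreasing at every iteration.
    """
    x = x0
    history = [objective(x, A, Bs)]
    for _ in range(iters):
        y = [a @ x / (x @ B @ x + 1.0) for a, B in zip(A, Bs)]
        H = sum(ym ** 2 * B for ym, B in zip(y, Bs))   # surrogate Hessian
        g = sum(ym * a for ym, a in zip(y, A))         # surrogate gradient
        x = np.linalg.solve(H, g)
        history.append(objective(x, A, Bs))
    return x, history

# Toy instance (illustrative, not from the cited papers)
A = [np.array([1.0, 2.0]), np.array([2.0, -1.0])]
Bs = [np.eye(2), np.diag([2.0, 1.0])]
x, hist = quadratic_transform(A, Bs, x0=np.array([1.0, 1.0]))
assert all(h2 >= h1 - 1e-9 for h1, h2 in zip(hist, hist[1:]))  # monotone ascent
```

The per-iteration cost is dominated by the linear solve against $\sum_m y_m^2 B_m$, mirroring the matrix-inversion-dominated cost noted above for QT/MM methods.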
4. Illustrative Applications
FMPs represent key structures across multiple domains:
- Communications: Beamforming design, SINR maximization, latency/minimum delay with finite blocklength, energy efficiency (EE) maximization, and spectral–energy tradeoff in multi-user MIMO—often formulated as sums or products of ratios, possibly incorporating per-user or aggregate power constraints (Soleymani et al., 3 Feb 2025, Shen et al., 13 Mar 2025).
- Radar and Sensing: CRB minimization under power constraints; the CRB is a trace-inverse of a Fisher information matrix, directly fitting FMP forms amenable to QT-based solution (Shen et al., 13 Mar 2025).
- Machine Learning and Clustering: Normalized cut in graph clustering, SVM margin maximization, and portfolio optimization, often written as ratio-type objectives or constraints (Shen et al., 13 Mar 2025, Krishtal et al., 2023).
- Resource Allocation: Optimization under RIS-aided MU-MIMO scenarios and FBL coding constraints, where metrics such as sum-delay or geometric-mean EE are explicitly FMPs and handled by MM-based single-loop surrogates (Soleymani et al., 3 Feb 2025).
5. Comparative Strengths and Limitations
| Method | Handles Multi-Ratio? | Matrix FFs? | Global Optimum? |
|---|---|---|---|
| Dinkelbach | No (single ratio) | No | Yes (scalar, convex-concave) |
| Generalized Dinkelbach | Some (twin-loop) | No | Stationary point |
| Quadratic Transform / MM | Yes | Yes | Stationary point (global with Dinkelbach) |
| QT+Region/Sign Decomp. | Yes (low-rank) | Some | Yes, if full region checked |
A notable limitation of classical approaches such as Dinkelbach and GDA is their inability to efficiently handle sums or products of multiple FFs, or matrix-valued FFs, especially with nonconvex numerators or denominators. The recent MM-based frameworks and QT surrogates surmount these limitations, generalizing to more complex FMPs with a single-loop structure and broader convergence guarantees (Soleymani et al., 3 Feb 2025, Shen et al., 13 Mar 2025).
6. Extensions, Generalizations, and Open Directions
Recent FMP research has generalized foundational algorithms to tackle:
- Composite FMPs: Sums, products, or minimums over scalar/matrix FFs in objectives or constraints, even with nonconvexities (Soleymani et al., 3 Feb 2025).
- Mixed packing and covering constraints: PTAS via multiplicative-weights, Lyapunov function approaches, and coupling of exponential potentials for primal–dual feasibility (0801.1987).
- Dynamic/sequential FMPs: Warm-starting variables from previous solutions for time-varying problems (0801.1987).
- Alternating optimization for hybrid variable sets, e.g., beamformers and RIS phase shifts, leveraging block-MM for each set (Soleymani et al., 3 Feb 2025).
A plausible direction for future work is extending global-optimality theory (beyond low-rank quadratic forms and sign-region enumeration) to general high-dimensional, nonconvex FMPs, especially in large-scale communications and ML systems.
References:
- "Quadratic Transform for Fractional Programming in Signal Processing and Machine Learning" (Shen et al., 13 Mar 2025)
- "A Framework for Fractional Matrix Programming Problems with Applications in FBL MU-MIMO" (Soleymani et al., 3 Feb 2025)
- "On Low-Rank Convex-Convex Quadratic Fractional Programming" (Krishtal et al., 2023)
- "A Nearly Linear-Time PTAS for Explicit Fractional Packing and Covering Linear Programs" (0801.1987)