
Fractional Matrix Programs (FMP)

Updated 30 March 2026
  • Fractional Matrix Programs are optimization problems where objectives or constraints are ratios of matrix or scalar functions reflecting key metrics such as SINR and MSE.
  • Advanced methods like Dinkelbach's method, quadratic transform, and MM algorithms enable efficient solutions even for complex nonconvex FMPs.
  • FMPs have broad applications in communications, radar, machine learning, and resource allocation, providing rigorous convergence guarantees and performance improvements.

Fractional Matrix Programs (FMP) refer to a broad class of mathematical optimization problems in which the objective or constraint functions are constructed from ratios of matrix-valued or scalar-valued functionals, often involving Hermitian or positive semidefinite matrices. FMPs play a foundational role in control, communications, signal processing, and machine learning owing to their ability to encode essential metrics such as signal-to-interference-plus-noise ratio (SINR), energy efficiency, minimum mean-square error (MSE), and the Cramér-Rao bound (CRB) (Shen et al., 13 Mar 2025, Soleymani et al., 3 Feb 2025, Krishtal et al., 2023). Recent advances unify and generalize classical methods (Dinkelbach, minorization–maximization, quadratic transform) for efficient algorithmic solution of FMPs involving sums or products of multiple fractional functions, often admitting matrix arguments and constraints.

1. Formal Definitions and Problem Classes

An FMP typically considers an optimization variable $\mathbf{X}$ (or a set of matrices $\{\mathbf{X}_1, \dots, \mathbf{X}_N\}$), aiming to extremize functions composed from multiple fractional functions (FFs):

  • Scalar FF: $h_{mi}(\{\mathbf{X}\}) = \frac{f_{mi}(\{\mathbf{X}\})}{g_{mi}(\{\mathbf{X}\})}$, with $f_{mi} \ge 0$ and $g_{mi} > 0$.
  • Matrix-ratio FF: $A_i(\mathbf{X})$ and $C_i(\mathbf{X})$ are Hermitian, and objectives involve traces $\mathrm{Tr}[A_i(\mathbf{X})\, C_i(\mathbf{X})^{-1}]$.

The general forms include:

  • Minimization: $\min_{\{\mathbf{X}\} \in \mathcal{X}} \sum_{m=1}^{M_0} h_{m0}(\{\mathbf{X}\})$, subject to $\sum_{m=1}^{M_i} h_{mi}(\{\mathbf{X}\}) \leq \eta_i$.
  • Maximization: $\max_{\{\mathbf{X}\} \in \mathcal{X}} \sum_{m=1}^{M_0} h_{m0}(\{\mathbf{X}\})$, subject to $\sum_{m=1}^{M_i} h_{mi}(\{\mathbf{X}\}) \geq 0$.

These forms may cover single or multiple ratios, sums or products of FFs, and support both scalar and matrix-valued numerators and denominators (Soleymani et al., 3 Feb 2025).

A canonical matrix-form FMP is
$$\max_{X \in \mathcal{X}} \sum_{i=1}^K w_i\, \mathrm{Tr}\!\left[A_i(X)\, C_i(X)^{-1}\right],$$
where $A_i(X) \in \mathbb{S}_+^{r_i}$, $C_i(X) \in \mathbb{S}_{++}^{r_i}$, and $w_i > 0$ (Shen et al., 13 Mar 2025).
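A minimal numerical sketch of evaluating this canonical objective; the specific choices of $A_i$ and $C_i$ are hypothetical toy functionals for illustration, not drawn from the cited papers:

```python
import numpy as np

def fmp_objective(X, A_fns, C_fns, w):
    """Evaluate sum_i w_i * Tr[A_i(X) C_i(X)^{-1}].

    Each A_fns[i](X) should return a PSD matrix and C_fns[i](X) a
    positive-definite matrix of the same size.
    """
    total = 0.0
    for Ai, Ci, wi in zip(A_fns, C_fns, w):
        A, C = Ai(X), Ci(X)
        # Compute C^{-1} A via a linear solve instead of forming the inverse.
        total += wi * np.trace(np.linalg.solve(C, A)).real
    return total

# Hypothetical toy instance: A(X) = X X^H (PSD), C(X) = I + X^H X (PD).
A_fns = [lambda X: X @ X.conj().T]
C_fns = [lambda X: np.eye(X.shape[1]) + X.conj().T @ X]
X = np.array([[1.0, 0.0], [0.0, 2.0]])
val = fmp_objective(X, A_fns, C_fns, [1.0])  # diagonal ratios 1/2 and 4/5
```

Using a linear solve rather than an explicit matrix inverse is the standard numerically stable way to evaluate trace-of-ratio terms.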

2. Core Algorithmic Paradigms

Several algorithmic traditions underpin FMP solution methods:

  • Dinkelbach's Method: Classical approach for single-ratio, scalar-valued, concave/convex fractional programs. The method seeks $x^\star = \arg\max_{x \in \mathcal{X}} f(x)/g(x)$ via a root-finding scheme, iteratively solving auxiliary programs $\max_{x \in \mathcal{X}} f(x) - \lambda g(x)$ until global optimality is established (Soleymani et al., 3 Feb 2025, Krishtal et al., 2023).
  • Generalized Dinkelbach (GDA): Extends to multiple ratios (e.g., $\min_k f_k(x)/g_k(x)$), but typically admits only stationarity guarantees and involves a twin-loop algorithm, limiting practical scalability (Soleymani et al., 3 Feb 2025).
  • Quadratic Transform (QT) and Shen–Yu Algorithm: Converts each fractional term into an equivalent biconvex or block-convex surrogate. For a sum of ratios, this produces the augmented objective
$$f_q(X, Y) = \sum_{i=1}^K w_i\, \mathrm{Tr}\!\left(2\,\Re[Y_i^H S_i(X)] - Y_i^H C_i(X) Y_i\right).$$
Alternating optimization over $X$ and the auxiliary variables $Y$ (e.g., $Y_i = C_i(X)^{-1} S_i(X)$) yields monotonic ascent in the objective value and converges to a stationary point (Shen et al., 13 Mar 2025, Krishtal et al., 2023).
  • Single-loop MM Algorithms: The recent framework (Soleymani et al., 3 Feb 2025) extends MM-type single-loop update strategies to arbitrary sums, products, and matrix-valued FFs. These methods generate surrogate majorants/minorants ensuring monotonic convergence to stationary points, requiring only mild regularity (continuity).
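For the scalar sum-of-ratios special case, the quadratic-transform iteration above reduces to two closed-form updates. A minimal sketch, assuming a toy scale-invariant instance $\max_{\|x\|=1} \sum_i (h_i^T x)^2 / (x^T B_i x)$ with positive-definite $B_i$ (a practical beamforming design would instead carry a power constraint):

```python
import numpy as np

def qt_sum_of_ratios(h, B, x0, iters=100):
    """Quadratic-transform (Shen-Yu) ascent for
    max_{||x||=1} sum_i (h_i^T x)^2 / (x^T B_i x),  with each B_i PD.

    Per iteration: y_i = h_i^T x / (x^T B_i x), then maximize the concave
    surrogate sum_i [2 y_i h_i^T x - y_i^2 x^T B_i x] in closed form.
    """
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        y = np.array([hi @ x / (x @ Bi @ x) for hi, Bi in zip(h, B)])
        M = sum(yi ** 2 * Bi for yi, Bi in zip(y, B))
        b = sum(yi * hi for yi, hi in zip(y, h))
        x = np.linalg.solve(M, b)
        x /= np.linalg.norm(x)  # objective is scale-invariant, so renormalize
    return x

def objective(x, h, B):
    return sum((hi @ x) ** 2 / (x @ Bi @ x) for hi, Bi in zip(h, B))
```

With a single ratio the iteration recovers the generalized Rayleigh-quotient maximizer $x \propto B^{-1}h$, whose objective value is $h^T B^{-1} h$; the ascent in the objective is monotonic per the cited convergence results.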

3. Theoretical Properties and Complexity

The theoretical guarantees are closely tied to the class of the FMP and chosen algorithmic approach:

  • Stationary-point convergence: Alternating MM/QT approaches, under mild differentiability and convex-feasibility assumptions, converge to stationary points of the original FMP. For convex–convex quadratic scalar cases with a single denominator, the Shen–Yu transform followed by a global Dinkelbach check ensures global maximization (Shen et al., 13 Mar 2025, Krishtal et al., 2023).
  • Global-optimality certification: By combining local improvement loops (Shen–Yu/QT) with outer global checks (Dinkelbach root-finding), global maximizers can be reliably identified, especially for low-rank quadratic forms (Krishtal et al., 2023).
  • Computational Complexity:
    • QT/MM per-iteration cost: dominated by $O(K r_i^3)$ matrix inversions and a convex (or closed-form) primal update (Shen et al., 13 Mar 2025).
    • Region-checking for low-rank quadratic fractional programs: at most $2^r$ regions if $A$ has rank $r$, with polynomial complexity in $n$ when $r$ is small (Krishtal et al., 2023).
    • Recent single-loop MM achieves at least the same per-iteration complexity as GDA, with simpler coordination and, empirically, faster convergence (Soleymani et al., 3 Feb 2025).
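The outer Dinkelbach root-finding check referenced above can be sketched as follows; restricting to a finite candidate grid keeps the inner maximization exact (the objective $f(x) = 4x - x^2$, $g(x) = x + 1$ is a made-up example):

```python
import numpy as np

def dinkelbach(f, g, candidates, tol=1e-10, max_iter=100):
    """Dinkelbach's method for max_x f(x)/g(x), g > 0, over a finite set.

    Iterate x_t = argmax_x [f(x) - lam*g(x)], then lam <- f(x_t)/g(x_t);
    when the auxiliary maximum reaches zero, lam is the global optimum.
    """
    lam = 0.0
    for _ in range(max_iter):
        vals = [f(x) - lam * g(x) for x in candidates]
        i = int(np.argmax(vals))
        if vals[i] <= tol:  # F(lam) = 0 certifies global optimality
            return candidates[i], lam
        lam = f(candidates[i]) / g(candidates[i])
    return candidates[i], lam

# Toy example: max (4x - x^2)/(x + 1) on [0, 4].
# Calculus gives the optimum at x = sqrt(5) - 1 with value 6 - 2*sqrt(5).
xs = np.linspace(0.0, 4.0, 4001)
x_star, lam_star = dinkelbach(lambda x: 4 * x - x ** 2,
                              lambda x: x + 1.0, xs)
```

The sequence of $\lambda$ values is monotonically increasing, which is what makes the scheme a reliable global certificate in the single-ratio concave/convex case.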

4. Illustrative Applications

FMPs represent key structures across multiple domains:

  • Communications: Beamforming design, SINR maximization, latency/minimum delay with finite blocklength, energy efficiency (EE) maximization, and spectral–energy tradeoff in multi-user MIMO—often formulated as sums or products of ratios, possibly incorporating per-user or aggregate power constraints (Soleymani et al., 3 Feb 2025, Shen et al., 13 Mar 2025).
  • Radar and Sensing: CRB minimization under power constraints; the CRB is a trace-inverse of a Fisher information matrix, directly fitting FMP forms amenable to QT-based solution (Shen et al., 13 Mar 2025).
  • Machine Learning and Clustering: Normalized-cut objectives in graph clustering, SVM margin maximization, and portfolio optimization, often written as ratio-type objectives or constraints (Shen et al., 13 Mar 2025, Krishtal et al., 2023).
  • Resource Allocation: Optimization under RIS-aided MU-MIMO scenarios and FBL coding constraints, where metrics such as sum-delay or geometric-mean EE are explicitly FMPs and handled by MM-based single-loop surrogates (Soleymani et al., 3 Feb 2025).
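As an example of the communications bullet above, the per-user SINR is itself a scalar fractional function of the beamformers; a minimal sketch with hypothetical variable names:

```python
import numpy as np

def sinr(w, h, k, noise):
    """SINR of user k: |h_k^H w_k|^2 / (sum_{j != k} |h_k^H w_j|^2 + noise).

    w: list of beamforming vectors, h: list of channel vectors.
    This ratio of quadratics is exactly the scalar FF h = f/g from Section 1.
    """
    signal = abs(h[k].conj() @ w[k]) ** 2
    interference = sum(abs(h[k].conj() @ w[j]) ** 2
                       for j in range(len(w)) if j != k)
    return signal / (interference + noise)
```

Maximizing a sum (or minimum) of such terms over the $w_j$ under a power constraint is the multi-ratio FMP that the QT/MM machinery targets.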

5. Comparative Strengths and Limitations

| Method | Handles Multi-Ratio? | Matrix FFs? | Global Optimum? |
|---|---|---|---|
| Dinkelbach | No (single ratio) | No | Yes (scalar, convex–concave) |
| Generalized Dinkelbach | Some (twin-loop) | No | Stationary point |
| Quadratic Transform / MM | Yes | Yes | Stationary point (global with Dinkelbach) |
| QT + Region/Sign Decomp. | Yes (low-rank) | Some | Yes, if full region checked |

A notable limitation of classical approaches such as Dinkelbach and GDA is their inability to efficiently handle sums or products of multiple FFs, or matrix-valued FFs, especially with nonconvex numerators or denominators. The recent MM-based frameworks and QT surrogates surmount these limitations, generalizing to more complex FMPs with a single-loop structure and broader convergence guarantees (Soleymani et al., 3 Feb 2025, Shen et al., 13 Mar 2025).

6. Extensions, Generalizations, and Open Directions

Recent FMP research has generalized foundational algorithms to tackle:

  • Composite FMPs: Sums, products, or minimums over scalar/matrix FFs in objectives or constraints, even with nonconvexities (Soleymani et al., 3 Feb 2025).
  • Mixed packing and covering constraints: PTAS via multiplicative-weights, Lyapunov function approaches, and coupling of exponential potentials for primal–dual feasibility (0801.1987).
  • Dynamic/sequential FMPs: Warm-starting variables from previous solutions for time-varying problems (0801.1987).
  • Alternating optimization for hybrid variable sets, e.g., beamformers and RIS phase shifts, leveraging block-MM for each set (Soleymani et al., 3 Feb 2025).

A plausible implication is ongoing research into extending global theory (beyond low-rank quadratic and sign-region enumeration) for general high-dimensional, nonconvex FMPs, especially in large-scale communications and ML systems.


References:

  • "Quadratic Transform for Fractional Programming in Signal Processing and Machine Learning" (Shen et al., 13 Mar 2025)
  • "A Framework for Fractional Matrix Programming Problems with Applications in FBL MU-MIMO" (Soleymani et al., 3 Feb 2025)
  • "On Low-Rank Convex-Convex Quadratic Fractional Programming" (Krishtal et al., 2023)
  • "A Nearly Linear-Time PTAS for Explicit Fractional Packing and Covering Linear Programs" (0801.1987)
