Fractional Programming (FP) Framework

Updated 28 December 2025
  • Fractional Programming (FP) is a mathematical optimization framework that models problems with objectives or constraints expressed as ratios of real-valued functions.
  • It leverages techniques like quadratic transforms and majorization-minimization to reformulate nonconvex problems into tractable forms with convergence guarantees.
  • FP drives efficient design and analysis in areas such as wireless communications, signal processing, and machine learning by addressing complex ratio-based performance metrics.

Fractional Programming (FP) is a mathematical optimization framework concerned with optimization problems in which the objective or constraint functions are ratios of real-valued functions. FP plays a foundational role in areas such as signal processing, wireless communications, machine learning, power network optimization, and resource allocation, where performance metrics such as SINR, spectral efficiency, mean-square error, and energy efficiency are often naturally expressed in fractional form. Over the past decade, a significant expansion of FP theory and methodology—driven by the development of the quadratic transform, matrix extensions, and connection to majorization-minimization (MM) theory—has enabled efficient and scalable solutions to a range of nonconvex, high-dimensional, and nonsmooth ratio problems.

1. Formulations and Taxonomy of Fractional Programming

The basic prototype of a fractional program is a single-ratio problem: $\max_{x \in \mathcal X} \frac{f(x)}{g(x)}$, where $\mathcal X$ is a feasible set, $f(x) \ge 0$, and $g(x) > 0$. Under the concave–convex setting ($f$ concave, $g$ convex, $\mathcal X$ convex), the problem is quasi-concave and classical methods such as Dinkelbach’s transform or the Charnes–Cooper transform yield global solutions (Shen et al., 13 Mar 2025).
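
As a concrete illustration of the single-ratio case, the sketch below runs Dinkelbach's iteration on a toy concave–convex ratio; the functions $f$, $g$ and the interval $[0, 4]$ are invented for the example, and a dense grid search stands in for the inner concave subproblem solver.

```python
import numpy as np

# Toy single-ratio problem: maximize f(x)/g(x) over x in [0, 4]
# (hypothetical example data: f concave and nonnegative, g convex and positive).
f = lambda x: np.log(1.0 + x)
g = lambda x: 0.5 + 0.25 * x**2

grid = np.linspace(0.0, 4.0, 10_001)     # grid search stands in for the inner solver

lam = 0.0                                # Dinkelbach parameter lambda_0
for it in range(50):
    # Inner subproblem: maximize f(x) - lam * g(x) over the feasible set.
    x = grid[np.argmax(f(grid) - lam * g(grid))]
    lam_new = f(x) / g(x)                # update lambda to the currently attained ratio
    if abs(lam_new - lam) < 1e-10:       # stop when max_x f(x) - lam * g(x) reaches zero
        break
    lam = lam_new

print(f"x* ≈ {x:.4f}, optimal ratio ≈ {lam:.6f}")
```

Each iteration increases $\lambda$ toward the optimal ratio; in the concave–convex setting this converges to the global maximizer, consistent with the classical guarantee quoted above.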

Extensions of the FP framework include:

  • Sum-of-ratios FP:

$$\max_{x \in \mathcal X} \; \sum_{m=1}^{M} \frac{f_m(x)}{g_m(x)},$$

which is NP-hard even for linear data (Shen et al., 13 Mar 2025). These arise in spectral efficiency maximization, sum-delay minimization, fuel consumption minimization, and portfolio selection (Anam et al., 2023, Yang et al., 2024, Soleymani et al., 3 Feb 2025).

  • Matrix/Vector Ratio FP: Matrix-valued numerators and denominators, e.g.,

$$\max_{X} \; \sum_{m=1}^{M} \frac{\mathrm{tr}(A_m X B_m X^\mathrm{H})}{\mathrm{tr}(C_m X D_m X^\mathrm{H})},$$

arising commonly in MIMO communication, waveform design, and Cramér–Rao bound minimization (Shen et al., 2018, Shen et al., 13 Mar 2025, Wang et al., 9 Jul 2025).

  • Mixed Max-and-Min FP: Simultaneous maximization and minimization of different ratio terms, e.g., maximizing legitimate users’ SINR while minimizing eavesdroppers’ SINR (Chen et al., 2023). The unified framework supports separable utilities $F(x) = \sum_{n=1}^{N_0} f_n^+(\rho_n(x)) + \sum_{n=N_0+1}^{N} f_n^-(\rho_n(x))$ with $\rho_n(x) = A_n(x)/B_n(x)$.
  • Functional, nonsmooth, and semi-algebraic FP: Nonsmooth fractional programs over Hilbert spaces or with SOS-convex semi-algebraic numerators/denominators (Bot et al., 2016, Yang et al., 2024).
  • Generalized/Multi-block Fractional Matrix Programs (FMP): Objectives and constraints are arbitrary (possibly nonconvex, matrix-valued) fractional functions, including sums and products (Soleymani et al., 3 Feb 2025).

2. Algorithmic Structures: Quadratic Transform and MM Theory

The quadratic transform is a pivotal development permitting tractable reformulation of nonconvex ratio objectives (Shen et al., 2018, Shen et al., 13 Mar 2025, Shen et al., 2023). The essential mechanism is to introduce an auxiliary variable that decouples numerator and denominator:

$$\frac{A(x)}{B(x)} = \max_{y} \left[\, 2y\sqrt{A(x)} - y^{2} B(x) \,\right],$$

with $y^{*} = \sqrt{A(x)}/B(x)$ for fixed $x$. When applied to the sum-of-ratios case, one augments the problem as

$$\max_x \sum_{i=1}^{M} \frac{A_i(x)}{B_i(x)} \;\longleftrightarrow\; \max_{x, \{y_i\}} \sum_{i=1}^{M} \left[\, 2y_i\sqrt{A_i(x)} - y_i^{2} B_i(x) \,\right].$$

Alternating optimization is performed over $\{y_i\}$ (closed form) and $x$ (usually a convex program for fixed $y$, if the data structure permits) (Shen et al., 13 Mar 2025, Rahman et al., 2023, Shen et al., 2023).
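
As a minimal numerical sketch of this alternating scheme (all channel gains, the noise level, and the power budget below are made-up illustration data), consider a toy sum-of-SINR power-control problem with $A_i(p) = G_{ii} p_i$ and $B_i(p) = \sigma^2 + \sum_{j \neq i} G_{ij} p_j$. The $y$-update uses the closed form $y_i = \sqrt{A_i}/B_i$; because the surrogate is separable and concave in each power, the $x$-update reduces to clipping a per-coordinate closed-form maximizer to the power budget instead of calling a general convex solver.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy K-user power-control instance (illustration data only).
K, P_max, sigma2 = 4, 1.0, 0.1
G = rng.uniform(0.05, 0.2, (K, K))              # cross gains G[i, j]: transmitter j -> receiver i
np.fill_diagonal(G, rng.uniform(0.8, 1.2, K))   # direct-link gains on the diagonal
d = np.diag(G).copy()

p = np.full(K, P_max)                           # initial transmit powers

for it in range(100):
    # y-update (closed form): y_i = sqrt(A_i) / B_i with A_i = d_i p_i, B_i = sigma2 + interference_i
    A = d * p
    B = sigma2 + G @ p - A
    y = np.sqrt(A) / B
    # p-update: the surrogate sum_i [2 y_i sqrt(d_i p_i) - y_i^2 B_i(p)] is separable and
    # concave in each p_i, so clipping its per-coordinate maximizer to [0, P_max] is exact.
    w = (y**2) @ G - (y**2) * d                 # w_i = sum_{j != i} y_j^2 G[j, i]
    p = np.clip((y * np.sqrt(d) / w) ** 2, 0.0, P_max)

print("final powers :", np.round(p, 3))
print("sum of SINRs :", float((d * p / (sigma2 + G @ p - d * p)).sum()))
```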

For optimization problems involving matrix fractions (e.g., MIMO or sensing), matrix generalizations of the quadratic transform and Lagrangian dual transform are employed, yielding auxiliary variables (matrices) that decouple the matrix ratio:

$$\frac{\mathrm{tr}(A X B X^\mathrm{H})}{\mathrm{tr}(C X D X^\mathrm{H})} \;\rightarrow\; \mathrm{tr}\!\left( \sqrt{A}^{\,\mathrm{H}} Y + Y^\mathrm{H}\sqrt{A} - Y^\mathrm{H} C X D X^\mathrm{H} Y \right).$$

The MM viewpoint justifies that, at each iteration, the current surrogate minorizes the original objective and is tight at the current point, ensuring monotonic improvement and convergence to stationary points (Shen et al., 2018, Chen et al., 2023, Shen et al., 2023).
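
As an elementary check of the minorization (a standard argument stated here for completeness, not drawn from any single cited paper): for the scalar transform, the AM–GM inequality gives $y^{2} B(x) + A(x)/B(x) \ge 2y\sqrt{A(x)}$ for any $y \ge 0$ (and the bound is trivial for $y < 0$), hence

$$2y\sqrt{A(x)} - y^{2} B(x) \;\le\; \frac{A(x)}{B(x)} \quad \text{for all feasible } x,$$

with equality exactly when $y = \sqrt{A(x)}/B(x)$. Fixing $y$ at its optimal value for the previous iterate therefore yields a surrogate that lower-bounds every ratio term and is tight at the current point, which is precisely the MM requirement for monotonic ascent.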

Variants and generalizations of MM, including nonhomogeneous quadratic bounding and block-coordinate updates, are employed for large-scale and high-dimensional problems to avoid matrix inversion or accelerate convergence (Shen et al., 2023, Chen et al., 2024, Wang et al., 9 Jul 2025).

3. Advanced Extensions: Mixed, Matrix, Nonsmooth, and Stochastic FP

  • Mixed max-min FP: The quadratic transform has been extended to treat minimization of ratios and unified max–min objectives using "inverse quadratic transforms" and mixed surrogates (Chen et al., 2023). For a min-FP term, the equivalent surrogate is

$$\min_x \frac{A(x)}{B(x)} \;\rightarrow\; \min_{x, \tilde y} \; \frac{1}{\left[\, 2\tilde y \sqrt{B(x)} - \tilde y^{2} A(x) \,\right]_+},$$

with update $\tilde y = \sqrt{B(x)}/(A(x)+\epsilon)$. The unified MM framework alternates these updates (see the toy numerical sketch after this list).

  • Fractional Matrix Programs (FMP): The FMP framework supports arbitrary sums/products of fractional matrix functions in both objectives and constraints, leveraging single-loop MM-based surrogates—offering significant generality and efficiency compared to Dinkelbach-style or twin-loop GDA methods (Soleymani et al., 3 Feb 2025).
  • Proximal-gradient and block-proximal methods: Nonsmooth fractional problems with convex but possibly non-differentiable numerators/denominators are addressed through variants of the proximal-gradient algorithm and multi-proximity gradient frameworks, achieving convergence even for nonsmooth/bounded-denominator settings under the Kurdyka–Łojasiewicz property (Bot et al., 2016, Zhou et al., 2023).
  • Stochastic and large-scale FP: For expectations-of-log-ratio objectives, as in stochastic MIMO precoding, matrix-FP surrogates are used to lower-bound $\mathbb{E}\left[\log\left|I + A B^{-1}\right|\right]$ and iteratively update auxiliary variables and precoders. Special surrogates and block optimization strategies are used to avoid cubic scaling in high-dimensional applications (Wang et al., 9 Jul 2025).
  • Semi-algebraic SOS-convex FP: When functions are given as SOS-convex semi-algebraic forms, strong duality permits solving the entire FP as a single semidefinite program, avoiding iterative procedures. The solution delivers both the optimizer $x^{*}$ and the optimum objective value via SDP duality (Yang et al., 2024).
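
The following toy sketch applies the min-FP surrogate from the mixed max–min item above (hypothetical functions and interval; a grid search stands in for the convex $x$-subproblem). Each iteration sets $\tilde y = \sqrt{B(x)}/(A(x)+\epsilon)$ and then maximizes the concave bracket $2\tilde y\sqrt{B(x)} - \tilde y^{2} A(x)$, which is equivalent to minimizing the surrogate $1/[\cdot]_+$ over the region where the bracket is positive.

```python
import numpy as np

# Toy min-ratio problem (hypothetical data): minimize A(x)/B(x) over x in [0, 3],
# with A(x) = 0.5 + x^2 >= 0 and B(x) = 1 + x > 0.
A = lambda x: 0.5 + x**2
B = lambda x: 1.0 + x

grid = np.linspace(0.0, 3.0, 10_001)   # grid search stands in for the convex x-subproblem
eps = 1e-12
x = 3.0                                # arbitrary feasible starting point

for it in range(50):
    # auxiliary update: y_tilde = sqrt(B(x)) / (A(x) + eps)
    y = np.sqrt(B(x)) / (A(x) + eps)
    # x-update: minimizing 1 / [2 y sqrt(B(x)) - y^2 A(x)]_+ is equivalent to maximizing
    # the (concave) bracket, whose maximum is positive, so a plain argmax suffices here.
    x = grid[np.argmax(2.0 * y * np.sqrt(B(grid)) - y**2 * A(grid))]

print(f"x* ≈ {x:.4f}, minimum ratio ≈ {A(x) / B(x):.6f}")
```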

4. Algorithmic Acceleration and Connections with First-order Methods

Recent works establish connections between FP-based MM iterations and gradient-projection algorithms. Using nonhomogeneous majorizers, the $x$-update in quadratic-transform MM is shown to be equivalent to a projected gradient step on the original fractional objective:

$$x^{k} = \mathrm{Proj}_{\mathcal X}\left\{ x^{k-1} + \frac{1}{2\lambda} \nabla f_o(x^{k-1}) \right\}.$$

Nesterov's acceleration can then be incorporated, improving the local convergence rate from the standard $O(1/k)$ to $O(1/k^{2})$ (Shen et al., 2023, Chen et al., 2024). These accelerations provide substantial empirical speedups, especially in large-scale beamforming and sensing applications where per-iteration complexity matters.
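
A minimal sketch of this gradient-projection view with Nesterov-style extrapolation, applied to the same flavor of toy sum-of-SINR objective as in Section 2 (all data invented; the fixed hand-tuned step size stands in for $1/(2\lambda)$, and no convergence claim beyond the cited works is implied for this nonconvex example):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy power-control instance (illustration data only); objective f_o(p) = sum_i d_i p_i / B_i(p).
K, P_max, sigma2 = 4, 1.0, 0.1
G = rng.uniform(0.05, 0.2, (K, K))
np.fill_diagonal(G, rng.uniform(0.8, 1.2, K))
d = np.diag(G).copy()

def grad_sum_sinr(p):
    """Gradient of f_o(p) = sum_i d_i p_i / (sigma2 + sum_{j != i} G[i, j] p_j)."""
    B = sigma2 + G @ p - d * p
    v = d * p / B**2
    return d / B - G.T @ v + d * v     # the d * v term removes the excluded j = i contribution

proj = lambda p: np.clip(p, 0.0, P_max)    # projection onto the box constraint
eta = 0.02                                 # hand-tuned step size, standing in for 1/(2*lambda)

p_prev = p = proj(np.full(K, 0.5 * P_max))
for k in range(1, 200):
    z = p + (k - 1) / (k + 2) * (p - p_prev)           # Nesterov extrapolation point
    p_prev, p = p, proj(z + eta * grad_sum_sinr(z))    # projected gradient ascent step

B = sigma2 + G @ p - d * p
print("sum of SINRs:", float((d * p / B).sum()))
```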

5. Applications: Communications, Signal Processing, and Machine Learning

Fractional programming underpins a vast range of applied optimization problems in communications, signal processing, and machine learning, where ratio-based metrics such as SINR, spectral efficiency, mean-square error, and energy efficiency arise naturally.

6. Theoretical Guarantees and Convergence

Monotonicity and convergence to a stationary (KKT) point are fundamental guarantees of alternating quadratic-transform and MM-based FP iterations under mild regularity assumptions. For problems with concave–convex structure, global optimality holds for single-ratio objectives; for multi-ratio or nonconvex problems, all limit points are stationary. When the Kurdyka–Łojasiewicz property holds, as in certain block-proximal or semi-algebraic programs, global convergence is established, sometimes at a linear rate (KL exponent $1/2$) (Zhou et al., 2023, Bot et al., 2016).

Acceleration via Nesterov-type extrapolation is theoretically justified for smooth differentiable objectives, yielding faster convergence both in theory and in practice (Shen et al., 2023, Chen et al., 2024).

7. Extensions, Practical Considerations, and Open Directions

Recent FP frameworks extend well beyond the classical single-ratio, concave–convex setting, as outlined in the preceding sections.

Practical implementations typically rely on standard convex optimization tools (CVX, MOSEK, SDPT3), with problem-specific structural exploitation (e.g., waterfilling, bipartite matching). FP-based surrogates are robust to initialization and scale efficiently to high-dimensional scenarios.
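
As an illustration of plugging the convex $x$-subproblem into an off-the-shelf solver, the sketch below uses CVXPY, the Python counterpart of the CVX toolbox mentioned above (the choice of CVXPY, the channels, the noise level, and the power budget are all assumptions of this example, not prescriptions from the cited papers). Each outer iteration performs the closed-form $y$-update of the complex quadratic transform and then solves the concave surrogate in the beamformer $w$ as a convex program.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)

# Toy single-ratio beamforming data: maximize |h^H w|^2 / (sigma2 + |g^H w|^2) s.t. ||w||^2 <= P.
N, P, sigma2 = 4, 1.0, 0.1
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)   # desired channel
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)   # interference channel

w_val = np.sqrt(P) * h / np.linalg.norm(h)    # feasible starting beamformer

for it in range(15):
    # y-update (closed form for the complex quadratic transform)
    y = (h.conj() @ w_val) / (sigma2 + abs(g.conj() @ w_val) ** 2)
    # w-update: the surrogate 2 Re{y* h^H w} - |y|^2 (sigma2 + |g^H w|^2) is concave in w
    w = cp.Variable(N, complex=True)
    surrogate = (2 * cp.real(np.conj(y) * (h.conj() @ w))
                 - (abs(y) ** 2) * (sigma2 + cp.sum_squares(g.conj() @ w)))
    cp.Problem(cp.Maximize(surrogate), [cp.sum_squares(w) <= P]).solve()
    w_val = w.value

sinr = abs(h.conj() @ w_val) ** 2 / (sigma2 + abs(g.conj() @ w_val) ** 2)
print("achieved SINR:", float(sinr))
```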

Ongoing research seeks to extend FP methodologies to adversarially robust settings, multi-objective tradeoff optimization (e.g., joint latency–energy), hybrid continuous/discrete networking, RIS/STAR-aided architectures, and cross-layer design with FP-encoded physical/MAC layer coupling (Soleymani et al., 3 Feb 2025, Wang et al., 2023).
