Fractional Programming Overview
- Fractional programming is an optimization framework where objectives are expressed as ratios of functions, enabling precise trade-off modeling.
- Key solution methods include Dinkelbach’s algorithm, Charnes–Cooper transformation, and quadratic transforms that reformulate nonconvex problems into tractable surrogates.
- Its wide-ranging applications in signal processing, wireless communications, machine learning, and finance highlight its practical value in efficiently balancing performance metrics.
Fractional programming (FP) is a class of mathematical optimization concerned with problems in which the objective and possibly the constraints contain functions that are ratios of other functions. Such programs appear pervasively in signal processing, wireless communications, machine learning, combinatorial optimization, energy systems, operations research, and finance due to the prevalence of quantities such as SINR, energy efficiency, Sharpe ratio, Cramér–Rao bound, and data throughput per resource. FP directly models problems where trade-offs or efficiencies are naturally expressed as ratios, enabling formulation and solution strategies aligned with underlying physical or operational principles.
1. Fundamental Fractional Programming Problem Classes
The canonical FP problem seeks to optimize an objective involving ratios,
$$\max_{x \in \mathcal{X}} \; \frac{f(x)}{g(x)},$$
with $f: \mathbb{R}^n \to \mathbb{R}_{\ge 0}$ and $g: \mathbb{R}^n \to \mathbb{R}_{> 0}$, and possibly further constraints. Key classes include:
- Single-ratio optimization: Maximize or minimize $f(x)/g(x)$.
- Max–min-ratio: Maximize $\min_i f_i(x)/g_i(x)$.
- Sum-of-ratios (Multi-ratio): Maximize $\sum_i f_i(x)/g_i(x)$ or other combinations of multiple ratios.
- Sum-of-log-ratios: Maximize $\sum_i \log\big(1 + f_i(x)/g_i(x)\big)$, prominent in network information theory and resource allocation.
- Matrix-ratio problems: Optimization of trace, determinant, or other matrix functionals of the form $\mathbf{F}(x)^{\mathsf{H}} \mathbf{G}(x)^{-1} \mathbf{F}(x)$, common in MIMO, precoding, or sensing (Shen et al., 13 Mar 2025).
Scalar single-ratio and max–min-ratio problems with certain convexity/concavity properties admit efficient global solution techniques (notably, Charnes–Cooper and Dinkelbach’s methods), whereas multi-ratio, sum-of-ratios, and matrix-ratio instances are generally NP-hard and only local or approximate solutions are tractable (Shen et al., 13 Mar 2025).
2. Transformations and Algorithmic Frameworks
Several foundational transformations have been developed to reformulate nonconvex ratio objectives into tractable surrogates:
2.1 Dinkelbach’s and Charnes–Cooper Methods
- Dinkelbach’s Algorithm: Converts the single-ratio problem into a sequence of parametric convex programs by iteratively solving $x^{(t)} = \arg\max_{x \in \mathcal{X}} \; f(x) - y^{(t)} g(x)$ and updating $y^{(t+1)} = f(x^{(t)})/g(x^{(t)})$ (Gorissen, 2015, Bot et al., 2016). This approach extends to robust and stochastic settings by embedding the parameter search in an outer loop (Gorissen, 2015).
- Charnes–Cooper Transformation: For linear-fractional and certain concave–convex cases, rewrites the problem as an equivalent convex program after homogenization, using a variable substitution that scales the decision variables by the reciprocal of the denominator to linearize the ratio (Bot et al., 2016).
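The Dinkelbach iteration above can be sketched on a toy single-ratio instance; the problem data below are illustrative, not taken from the cited works:

```python
def dinkelbach(f, g, argmax_sub, y0=0.0, tol=1e-9, max_iter=100):
    """Dinkelbach's algorithm for max_x f(x)/g(x):
    solve x = argmax f(x) - y*g(x), then update y = f(x)/g(x)."""
    y = y0
    for _ in range(max_iter):
        x = argmax_sub(y)
        gap = f(x) - y * g(x)   # optimal value of the parametric subproblem
        y = f(x) / g(x)         # ratio update
        if abs(gap) < tol:      # the gap vanishes exactly at the optimal ratio
            break
    return x, y

# Toy instance: maximize (2x + 1) / (x^2 + 2) over x in [0, 3].
f = lambda x: 2 * x + 1
g = lambda x: x ** 2 + 2
# For y > 0 the concave subproblem max 2x + 1 - y*(x^2 + 2) is maximized
# at x = 1/y, clipped to [0, 3]; for y <= 0 the subproblem is increasing in x.
argmax_sub = lambda y: min(max(1 / y, 0.0), 3.0) if y > 0 else 3.0

x_star, y_star = dinkelbach(f, g, argmax_sub)  # both converge to 1
```

The sequence $y^{(t)}$ increases monotonically to the optimal ratio, and the root of the parametric value function certifies global optimality in this concave/convex single-ratio setting.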
2.2 Quadratic Transform (QT)
The quadratic transform is central for multi-ratio, sum-of-ratios, and matrix-valued problems (Shen et al., 2018, Shen et al., 13 Mar 2025, Chen et al., 2023):
$$\frac{f(x)}{g(x)} = \max_{y \in \mathbb{R}} \; \big[\, 2y\sqrt{f(x)} - y^2 g(x) \,\big], \qquad y^\star = \frac{\sqrt{f(x)}}{g(x)}.$$
This equivalence lifts each ratio to a biconvex surrogate and exposes a block coordinate ascent structure: alternating updates of $x$ and $y$. When applied to $M$ ratios, $M$ auxiliary variables $y_1, \dots, y_M$ are introduced, producing a sum of concave-quadratic surrogates in the block variables.
The transform admits analogues for the minimization case and for matrix-ratio functions, resulting in surrogate problems with closed-form auxiliary updates and tractable alternating optimization (Shen et al., 2018, Chen et al., 2023, Shen et al., 13 Mar 2025).
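A minimal numerical sketch of this alternating scheme for a scalar sum-of-ratios instance (toy data; the surrogate $x$-update is done here by ternary search, one of several valid choices):

```python
import math

def ternary_max(h, lo, hi, iters=200):
    """Maximize a concave 1-D function h on [lo, hi] by ternary search."""
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if h(m1) < h(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2

def qt_sum_of_ratios(fs, gs, lo, hi, outer_iters=50):
    """Quadratic transform for max_x sum_i f_i(x)/g_i(x):
    alternate closed-form y-updates with surrogate maximization in x."""
    x = (lo + hi) / 2
    for _ in range(outer_iters):
        # closed-form auxiliary update: y_i = sqrt(f_i(x)) / g_i(x)
        ys = [math.sqrt(f(x)) / g(x) for f, g in zip(fs, gs)]
        # x-update: maximize the concave surrogate sum_i 2*y_i*sqrt(f_i) - y_i^2*g_i
        surrogate = lambda z: sum(2 * y * math.sqrt(fi(z)) - y * y * gi(z)
                                  for y, fi, gi in zip(ys, fs, gs))
        x = ternary_max(surrogate, lo, hi)
    return x

# Toy instance on [0, 3]: maximize (x+1)/(x^2+2) + (3-x)/(x+1).
fs = [lambda x: x + 1, lambda x: 3 - x]       # nonnegative numerators
gs = [lambda x: x ** 2 + 2, lambda x: x + 1]  # positive denominators
x_star = qt_sum_of_ratios(fs, gs, 0.0, 3.0)   # converges to the boundary x = 0
obj = sum(f(x_star) / g(x_star) for f, g in zip(fs, gs))
```

Each outer iteration is an MM step, so the objective sequence is nondecreasing; as the text notes, for sum-of-ratios this certifies only a stationary point, not global optimality.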
2.3 Lagrangian Dual and Generalized Transforms
For objectives involving compositions such as sums of logarithms of ratios, a Lagrangian dual or multiplier transform is used to decouple each logarithm from its inner ratio via auxiliary variables $\gamma_i$ (Shen et al., 2018, Chen et al., 2023). This step is typically followed by the quadratic transform, yielding block-coordinate updates in $x$, $\gamma$, and $y$.
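In the form popularized for weighted sum-log-ratio objectives (a sketch; nonnegative weights $w_i$ are assumed), the transform reads
$$\max_{x} \sum_i w_i \log\Big(1 + \frac{f_i(x)}{g_i(x)}\Big) \;\Longleftrightarrow\; \max_{x,\;\gamma \ge 0} \sum_i w_i \Big[\log(1+\gamma_i) - \gamma_i + \frac{(1+\gamma_i)\, f_i(x)}{f_i(x) + g_i(x)}\Big],$$
where the inner maximization over $\gamma$ is solved in closed form by $\gamma_i^\star = f_i(x)/g_i(x)$ (substituting this value recovers the original objective term by term), leaving a residual ratio to which the quadratic transform applies.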
2.4 Minorization–Maximization (MM) and Block Coordinate Descent
The above transforms produce surrogates that satisfy the MM principle: the surrogate function is a global lower bound (for maximization problems) that coincides with the original objective at the current iterate. The optimization proceeds by alternately maximizing over different blocks, producing nondecreasing objective sequences and guaranteeing convergence to stationary points under standard regularity conditions (Shen et al., 2018, Shen et al., 2023).
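Concretely, writing the original maximization objective as $F(x)$ and the transform-induced surrogate at iterate $x^{(t)}$ as $s(\cdot \mid x^{(t)})$, the two MM conditions are
$$s(x \mid x^{(t)}) \le F(x) \;\; \forall x, \qquad s(x^{(t)} \mid x^{(t)}) = F(x^{(t)}),$$
which immediately give the ascent chain
$$F(x^{(t+1)}) \;\ge\; s(x^{(t+1)} \mid x^{(t)}) \;\ge\; s(x^{(t)} \mid x^{(t)}) \;=\; F(x^{(t)}),$$
so each block update can only increase the objective.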
2.5 Extensions: Acceleration, Nonhomogeneous and Manifold Techniques
- Nesterov-style acceleration: By viewing the QT $x$-update as a gradient-projection step, Nesterov's extrapolation accelerates convergence, reducing iteration complexity from $O(1/\epsilon)$ to $O(1/\sqrt{\epsilon})$ for a target $\epsilon$-accuracy (Shen et al., 2023).
- Nonhomogeneous transforms: Enable elimination of large matrix inverses for massive MIMO, further lowering per-iteration cost (Zhu et al., 6 Jan 2026, Wang et al., 9 Jul 2025).
- Manifold optimization: For fractional objectives defined over matrix manifolds (e.g., unitary or Stiefel constraints), transforms are combined with Riemannian optimization to solve problems such as RIS scattering matrix design (Fidanovski et al., 10 Nov 2025).
3. Theoretical Properties and Convexification
3.1 Convergence and Optimality
- Single-ratio and certain max–min problems: Strong duality and global optimality hold when the numerator is convex, the denominator is concave, and the feasible set is convex (Gorissen, 2015, Bot et al., 2016).
- Multi-ratio/sum-of-ratios and matrix ratios: For general (NP-hard) classes, the quadratic transform with MM guarantees stationary-point convergence but not global optimality (Shen et al., 13 Mar 2025, Wang et al., 9 Jul 2025, Boţ et al., 2023).
- Robust FP: Robust optimization can be integrated directly, with conditions that guarantee reduction to a convex or iterative convex program (Gorissen, 2015).
3.2 Convexification and Convex Hull Tightening
Advanced convexification techniques enable tighter relaxations for discrete and polynomial FP:
- Projective liftings: Relate the convex hull of fractional functions to the convex hull of their polynomial analogues (He et al., 2023).
- Boolean quadric polytopes: Exact and tight relaxations for 0–1 ratio-of-affines, using BQP inequalities (McCormick, triangle, odd-cycle) to strengthen relaxations for binary variables (He et al., 2023).
- Copositive programming: Used for ratio-of-quadratics and conic quadratic programming in high-dimensional cases (He et al., 2023).
- Moment-hull representations: Moments-based SDP formulations for univariate fractional polynomials yield strong relaxations and exactness for small problem instances (Yang et al., 2024, He et al., 2023).
3.3 Splitting and Proximal Methods
Recent work has introduced operator-splitting and proximal schemes for FP with nonsmooth, composite, and nonconvex structure:
- Proximal-gradient methods for minimizing ratios $f(x)/g(x)$ in Hilbert spaces, combining a proximal step on the (possibly nonsmooth) numerator $f$ with a gradient step on the smooth denominator $g$, with global convergence under concave denominators and convergence to critical points under convex denominators and the Kurdyka–Łojasiewicz (KL) property (Bot et al., 2016).
- Full-splitting, adaptive, and nonmonotone-line-search algorithms for nonconvex and nonsmooth FPs with composed linear operators achieve subsequential and, with the KL property, global convergence (Boţ et al., 2023).
- Parameter-free SDP relaxations yield global solutions for sum-of-squares-convex semi-algebraic FPs via a single SDP (Yang et al., 2024).
4. Extensions: Mixed, Stochastic, and Robust FP
4.1 Mixed Max-and-Min FP
Problems with interleaved maximization and minimization over ratios (e.g., maximizing legitimate receiver SINR while minimizing eavesdropper SINR) are handled by a unified extension of the quadratic transform, yielding joint surrogates for both objectives with MM convergence (Chen et al., 2023).
4.2 Stochastic and Robust Settings
- Stochastic FP: Ergodic-sum-rate or expectation-constrained FP arises in MIMO precoding under channel uncertainty. Direct application of FP inside the expectation is infeasible; instead, exchanging the order of expectation and surrogate construction yields a tractable lower bound amenable to block-MM updates (Wang et al., 9 Jul 2025).
- Robust FP: Extends FP to uncertainty in numerator, denominator, and constraints, with single-shot or iterative convexification depending on independence/structure of uncertainty (Gorissen, 2015).
5. Applications in Communications, Machine Learning, and Engineering
5.1 Communication Systems
- Beamforming and power control: FP, and in particular the quadratic transform, underpins algorithms for max-rate, min-power, and energy-efficient beamforming in MISO/MIMO, NOMA, RIS, and D2D networks, including robust designs under imperfect channel state information (Shen et al., 2018, Iimori et al., 2020, Fidanovski et al., 10 Nov 2025, Zhu et al., 6 Jan 2026).
- Scheduling and resource allocation: Discrete and mixed-integer FP (e.g., user association, offloading, matching) utilize FP surrogates in conjunction with combinatorial optimization, matching, or penalty-based relaxations (Shen et al., 2018, Wang et al., 2023).
5.2 Signal Processing and Machine Learning
- SVM and normalized cut: Margin ratios, robust ratios, and spectral clustering objectives are cast as FP and solved via QT or MM (Shen et al., 13 Mar 2025).
- Kullback–Leibler divergence optimization: Sensing and detection waveform design with KLD objectives is accelerated from cubic to quadratic per-iteration complexity via FP + nonhomogeneous relaxation, yielding order-of-magnitude runtime improvements (Park et al., 2 Jan 2026).
- Graph clustering and combinatorial biclustering: Fractional ratio objectives in graph cuts and biclustering are tackled through matrix QT, BQP relaxations, and specialized branch-and-bound (Utkina et al., 2016, He et al., 2023).
5.3 Energy Systems and Operations Research
- Fuel efficiency and power systems: Deployment of FP for large-scale sum-of-ratios objectives in fuel consumption, dispatch, and resource allocation ensures convergence and scalability unattainable for direct NLP approaches (Anam et al., 2023).
- SOS-convex and generalized FPs: SDP relaxations and moment-based representations provide tractable and globally optimal solutions for classes of nonconvex algebraic FPs (Yang et al., 2024).
6. Practical, Numerical, and Complexity Considerations
- Per-iteration cost: Classical FP often requires matrix inversions; nonhomogeneous relaxations and deep-unfolded approaches have eliminated cubic complexity in massive MIMO and large architectures (Zhu et al., 6 Jan 2026, Wang et al., 9 Jul 2025).
- Parallelizability and decentralization: The block-coordinate and auxiliary-variable structure of FP surrogates admit parallel and distributed implementations, significant for large networks (Shen et al., 2023).
- Convergence rates: QT and block-MM typically yield $O(1/t)$ objective-gap decay; Nesterov-style acceleration and STEM-type fixed-point schemes yield $O(1/t^2)$ or superlinear convergence (Shen et al., 2023, Park et al., 2 Jan 2026).
7. Directions for Theory and Open Problems
- Global optimality in NP-hard classes: Integrating hierarchy-based relaxations (SOS, moment, RLT) with FP surrogates to close the relaxation gap for multi-ratio and discrete FPs remains an open challenge (He et al., 2023, Yang et al., 2024).
- Integration with learning: Deep-unfolded FP architectures (as in DeepFP) bridge physics-inspired and data-driven paradigms for beamforming, resource allocation, and detection in high-dimensional nonconvex scenarios (Zhu et al., 6 Jan 2026).
- Stochastic, online, and multi-agent FP: Open problems include streaming and stochastic cases, dynamic problem data, and extensions to decentralized or federated optimization (Shen et al., 2023, Wang et al., 9 Jul 2025).
- Bilevel and multi-objective FP: Unifying FP in hierarchical or multi-fidelity contexts.
In summary, FP unifies the modeling and solution of optimization problems characterized by fractional structures, from classical cases to modern multilayered, mixed, matrix, and learning-accelerated paradigms. Central innovations, such as the quadratic transform, MM-based alternating optimization, convexification hierarchies, and joint optimization with machine learning, provide a comprehensive methodological and algorithmic foundation capable of addressing many of the most intricate problems in contemporary engineering, statistics, and data science (Shen et al., 2018, Shen et al., 13 Mar 2025, Shen et al., 2023, Chen et al., 2023, Zhu et al., 6 Jan 2026).