
Fixed Point Formulation

Updated 15 April 2026
  • Fixed Point Formulation is a mathematical strategy that recasts complex problems into fixed point equations using operators such as nonexpansive, firmly nonexpansive, and contractive mappings.
  • It underpins various applications including signal recovery, constrained optimization, power flow, and quantum theory by unifying iterative algorithms and convergence analysis.
  • FPF algorithms employ block-iterative, relaxation, and extrapolation techniques, enabling parallelism and scalable solutions with strong numerical robustness.

A fixed point formulation (FPF) is a mathematical strategy in which the solution to a problem is characterized as a point that remains invariant under a specific operator or map. In practice, this often enables the reformulation of analytic or computational problems—such as signal recovery, nonlinear inverse problems, constrained optimization, computational physics, and quantum theory—into tractable fixed point problems involving nonexpansive, firmly nonexpansive (FNE), or contractive mappings. The FPF paradigm provides a unified framework for algorithm design, convergence analysis, and operator-splitting, and underpins numerous modern computational methods in applied mathematics, engineering, and physics.

1. Mathematical Foundations and Operator Framework

A fixed point of a map $T:\mathcal{X}\to\mathcal{X}$ (with $\mathcal{X}$ a finite-dimensional Hilbert space or Banach space) is a point $x^*\in\mathcal{X}$ such that $T(x^*)=x^*$. The fundamental problem is thus

Find $x^*\in\mathcal{X}$ such that $x^* = T(x^*)$.

Fixed point theorems, including the Banach contraction theorem for contractive maps and extensions for nonexpansive or averaged operators, underlie the convergence properties of iterative fixed-point schemes.

Of particular importance are the firmly nonexpansive (FNE) operators satisfying

$\|Tx - Ty\|^2 + \|(I-T)x - (I-T)y\|^2 \le \|x-y\|^2$ for all $x,y\in\mathcal{X}$,

and averaged operators, for which $T=(1-\alpha)I + \alpha Q$ with $Q$ nonexpansive and $\alpha\in(0,1)$. Such operators guarantee robust convergence for key algorithms (e.g., Picard, Krasnoselʹskiǐ–Mann, Douglas–Rachford, and forward–backward splittings) (Combettes et al., 2020).
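As a minimal illustration of these guarantees, the Picard iteration below is applied to the strict contraction $T(x)=\cos(x)/2$ (Lipschitz constant $1/2$); the map is a toy example chosen for illustration, not one drawn from the cited references:

```python
# Picard iteration x_{n+1} = T(x_n) for a strict contraction.
# T(x) = cos(x)/2 has Lipschitz constant 1/2 < 1 on the real line, so the
# Banach contraction theorem guarantees a unique fixed point and linear
# (geometric) convergence from any starting point.
import math

def picard(T, x0, tol=1e-12, max_iter=1000):
    x = x0
    for n in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:   # successive iterates have stabilized
            return x_next, n + 1
        x = x_next
    return x, max_iter

T = lambda x: math.cos(x) / 2.0
x_star, iters = picard(T, x0=0.0)
print(x_star, abs(T(x_star) - x_star))   # fixed point near 0.4502, tiny residual
```

For merely nonexpansive (rather than contractive) maps, plain Picard iteration can fail to converge, which is why averaged variants such as Krasnoselʹskiǐ–Mann are used instead.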

2. Signal Recovery via Fixed Point Formulation

The classical signal-recovery FPF, as developed by Combettes & Woodstock, targets the problem of recovering a signal $\bar{x}\in\mathcal{X}$ from nonlinear measurements $p_i = F_i(\bar{x})$ ($i\in I$) and convex priors $\bar{x}\in C_j$ ($j\in J$). The nonlinear constraint system

Find $x\in\mathcal{X}$ such that $F_i(x) = p_i$ for every $i\in I$ and $x\in C_j$ for every $j\in J$

is typically nonconvex or intractable. The core FPF insight is to associate each constraint with a surrogate operator that is FNE and whose fixed point set contains the corresponding solution set, resulting in a family of FNE operators $(T_k)_{k\in K}$:

  • For measurement constraints ($F_i(x)=p_i$): an FNE operator $T_i$ whose fixed point set contains $\{x\in\mathcal{X} : F_i(x)=p_i\}$
  • For simple convex priors ($x\in C_j$): the orthogonal projector $P_{C_j}$ (FNE)
  • For convex-inequality priors ($g_j(x)\le 0$ with $g_j$ convex): subgradient projection

The problem reduces to the common fixed point system:

Find $x^*\in\bigcap_{k\in K}\operatorname{Fix} T_k$,

solved by a block-iterative, extrapolated scheme with Fejér-monotone convergence guarantees under appropriate control conditions (Combettes et al., 2020).
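A minimal, unextrapolated special case of such a scheme is the method of cyclic projections, sketched below for two convex sets (a halfspace and a disk, chosen purely for illustration); each orthogonal projector is FNE, and the iterates are Fejér monotone with respect to the intersection:

```python
# Cyclic projections onto two convex sets: a halfspace and a disk.
# Each orthogonal projector is firmly nonexpansive, so alternating over the
# two "blocks" converges to a point of the intersection (the common fixed
# point set of the two projectors) whenever that intersection is nonempty.
import numpy as np

def proj_halfspace(x, a, b):          # C1 = {x : <a, x> >= b}
    gap = b - a @ x
    return x + max(gap, 0.0) * a / (a @ a)

def proj_ball(x, radius):             # C2 = {x : ||x|| <= radius}
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

a, b, radius = np.array([1.0, 1.0]), 1.0, 1.0
x = np.array([3.0, -2.0])             # infeasible starting point
for _ in range(200):                   # alternate over the two blocks
    x = proj_ball(proj_halfspace(x, a, b), radius)
print(x)                               # a point satisfying both constraints
```

Block-iterative variants of this idea activate only a subset of the operators per sweep and add extrapolated step sizes, which is what yields the parallelism noted above.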

3. Operator Splitting, Algorithms, and Convergence

Many FPF-based algorithms derive from operator splitting and are classified by the regularity of the mapping:

  • For strict contractions, Picard iteration converges linearly.
  • For nonexpansive or averaged maps, Krasnoselʹskiǐ–Mann and Douglas–Rachford iterations provide weak or strong convergence to common fixed points (Combettes et al., 2020).
  • For block or coordinate decompositions, as in convex feasibility or large-scale data science problems, the FPF allows for random or greedy block updates, enabling parallelism or asynchronous computation.
  • In signal recovery, block-iterative schemes alternate over measurement and prior blocks, using extrapolated steps adapted to current residuals (Combettes et al., 2020).
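As a concrete sketch of a splitting iteration, the following applies Douglas–Rachford to a two-set feasibility problem; the sets (a line and the unit disk) are illustrative choices, not an example from the cited papers:

```python
# Douglas–Rachford iteration for the feasibility problem A ∩ B, with
# A = {(u, v) : v = 0.5} (a line) and B = the closed unit disk.
# The DR operator T(x) = x + P_A(2 P_B(x) - x) - P_B(x) is averaged, so its
# iterates converge, and the "shadow" P_B(x_n) converges to a point of A ∩ B.
import numpy as np

def P_A(x):                       # projection onto the line v = 0.5
    return np.array([x[0], 0.5])

def P_B(x):                       # projection onto the unit disk
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

x = np.array([2.0, -3.0])
for _ in range(500):
    x = x + P_A(2 * P_B(x) - x) - P_B(x)
shadow = P_B(x)
print(shadow)                     # lies in A ∩ B
```

Note that the iterates $x_n$ themselves need not belong to either set; it is the shadow sequence $P_B(x_n)$ that lands in the intersection.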

In nonconvex, nonlinear, or noisy settings, e.g., in feasibility-based fixed point networks (F-FPNs), operators may be constructed as compositions $T = S \circ P$, where $P$ is a nonexpansive data-consistency projection and $S$ is a (possibly learned) nonexpansive regularization operator, ensuring global convergence (Heaton et al., 2021).
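This composition of a data-consistency projection with a nonexpansive regularizer can be sketched with soft-thresholding standing in for the learned operator — an assumption made purely for illustration (soft-thresholding is the proximity operator of the $\ell_1$ norm and hence firmly nonexpansive), not the architecture of the cited paper:

```python
# Sketch of a composition T = S ∘ P: P projects onto the data-consistency
# set {x : A x = b}; S is a nonexpansive "regularizer" (here soft-
# thresholding, a stand-in for a learned operator). A Krasnosel'skii–Mann
# averaging step guarantees convergence of the fixed-point iteration.
import numpy as np

A = np.array([[1.0, 1.0]])             # underdetermined measurement operator
b = np.array([2.0])
A_pinv = np.linalg.pinv(A)

def P(x):                              # projection onto {x : A x = b}
    return x + A_pinv @ (b - A @ x)

def S(x, tau=0.1):                     # soft-thresholding: prox of tau*||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

x = np.zeros(2)
for _ in range(100):
    x = 0.5 * x + 0.5 * S(P(x))        # averaged (KM) fixed-point iteration
print(x)                               # converges to approx [0.9, 0.9]
```

At the fixed point, the shadow $P(x)$ is exactly data-consistent while $x$ balances data consistency against the regularizer, mirroring the feasibility trade-off described above.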

4. Application Domains: Inverse Problems, Optimization, and Physics

FPF methodologies span a variety of domains:

Inverse Problems and Imaging: Recovery from nonlinear, clipped, or quantized measurements (e.g., thresholded scalar products, clipped low-pass data) via common fixed points of FNEs (Combettes et al., 2020). F-FPN architectures learn nonexpansive regularizers directly from data and iterate until joint feasibility with measurement constraints is achieved (Heaton et al., 2021).

Constrained Optimization: In constrained minimization problems, FPF strategies treat the multipliers as variables of a fixed point map $\lambda \mapsto \Lambda(\lambda)$ arising from an augmented Lagrangian $L_r(x,\lambda)$, circumventing the need for saddle-point or dual ascent methods and guaranteeing convergence under convexity and coercivity. In the convex case, fixed points of $\Lambda$ coincide exactly with Karush–Kuhn–Tucker solutions (Pedregal, 2014).
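A minimal sketch of the multiplier-as-fixed-point idea on an equality-constrained quadratic toy problem (the specific update below is the standard method-of-multipliers map, shown for illustration rather than Pedregal's exact construction):

```python
# Multipliers as fixed-point variables: for min (1/2)||x||^2 s.t. <a, x> = b,
# the inner minimizer of the augmented Lagrangian
#   L_r(x, lam) = (1/2)||x||^2 + lam*(<a,x> - b) + (r/2)*(<a,x> - b)^2
# has a closed form, and the multiplier update lam -> lam + r*(<a,x(lam)> - b)
# is a contraction whose fixed point is the KKT multiplier.
import numpy as np

a, b, r = np.array([1.0, 2.0]), 3.0, 1.0

def inner_min(lam):
    # Stationarity of L_r in x: (I + r a a^T) x = (r b - lam) a
    H = np.eye(2) + r * np.outer(a, a)
    return np.linalg.solve(H, (r * b - lam) * a)

lam = 0.0
for _ in range(50):
    x = inner_min(lam)
    lam = lam + r * (a @ x - b)       # fixed-point (multiplier) update

print(x, lam)   # x -> (0.6, 1.2), lam -> -0.6, the KKT pair
```

At the fixed point the constraint residual $\langle a, x\rangle - b$ vanishes, so the multiplier update leaves $\lambda$ unchanged, exactly the KKT condition.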

Power Systems: Both classic and multidimensional AC power flow problems admit FPF formulations. Single-bus updates are cast as intersection-of-circles problems, and the resulting global voltage-update maps $V \mapsto T(V)$ are solved by fixed point iteration, with guaranteed convergence to the high-voltage branch under suitable impedance/load ratios (Guddanti et al., 2018, Duque et al., 2024). Tensorized versions scale to massive multi-scenario simulations.
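The flavor of such an update can be sketched on a two-bus toy system (per-unit values assumed; this is the classic fixed-point power-flow map, not the intersection-of-circles construction itself):

```python
# Minimal two-bus fixed-point power flow sketch (per-unit values assumed):
# a slack bus at voltage E feeds a constant-power load S through line
# impedance Z, so the load voltage satisfies V = E - Z * conj(S / V).
# Under light loading this map is a contraction, and iterating from V = E
# selects the high-voltage solution branch.
E = 1.0 + 0.0j            # slack-bus voltage
Z = 0.01 + 0.05j          # line impedance
S = 0.30 + 0.10j          # complex power drawn by the load

V = E                     # initialize on the high-voltage branch
for _ in range(100):
    V = E - Z * (S / V).conjugate()

residual = abs(V - (E - Z * (S / V).conjugate()))
print(V, residual)        # |V| slightly below 1.0, residual near machine precision
```

Starting instead near $V \approx 0$ would target the low-voltage branch, which is why the initialization encodes the operational branch selection mentioned above.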

Computational Physics: In variational multiscale FEM for Navier–Stokes, Picard FPF approaches yield linearly convergent, stabilized solvers at moderate Reynolds numbers, although they may fail at high Reynolds numbers (0806.3514).

Quantum Theory: FPFs underpin time-symmetric representations of quantum probability, replacing initial value problems with a universal Keldysh-contour wavefunction and fixed point constraints at all measurement events (Ridley, 2021, Ridley et al., 2023). This formalism realizes both time and event symmetry and admits event-by-event quantum histories as sequences of fixed points.

State-Space Smoothing: In linear Gaussian models, new FPFs produce numerically robust fixed-point smoothers (e.g., for estimating the initial state from all subsequent observations) by leveraging recursive affine–Gaussian updates and Cholesky-based factor propagation, achieving high stability and low memory complexity compared to fixed-interval smoothers or state augmentation strategies (Krämer, 2024).

5. Structure of Fixed Point Algorithms: Abstract and Concrete Strategies

The FPF framework encompasses several classes of algorithms, all centered on the construction and analysis of (possibly composite) operator maps:

  • General structure:
    • Initialize $x_0 \in \mathcal{X}$
    • Iterate $x_{n+1} = T(x_n)$ (simple or composite operator $T$)
    • Use block or coordinate updates, possibly randomized or weighted
    • Apply relaxation or acceleration (e.g., inertial steps, Anderson acceleration)
  • Convergence guarantees: Contractive or FNE maps ensure global convergence; with additional block-activation or control conditions, block-iterative schemes converge to common fixed points.
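The abstract loop and its accelerated variant can be sketched on a simple affine contraction; the map $T(x)=0.9x+1$ and the extrapolation parameter $\beta=0.4$ are illustrative choices:

```python
# Inertial (heavy-ball style) acceleration of a fixed-point iteration on the
# affine contraction T(x) = 0.9 x + 1, whose fixed point is x* = 10.
# The extrapolated update x_{n+1} = T(x_n + beta*(x_n - x_{n-1})) improves
# the asymptotic rate from 0.9 to the spectral radius of the associated
# two-step linear recurrence (about 0.82 for beta = 0.4).
def T(x):
    return 0.9 * x + 1.0

def iterate(beta, tol=1e-8, max_iter=10000):
    x_prev, x = 0.0, 0.0
    for n in range(1, max_iter + 1):
        x_next = T(x + beta * (x - x_prev))
        x_prev, x = x, x_next
        if abs(x - 10.0) < tol:      # distance to the known fixed point
            return x, n
    return x, max_iter

x_plain, n_plain = iterate(beta=0.0)        # plain Picard iteration
x_inertial, n_inertial = iterate(beta=0.4)  # inertial extrapolation
print(n_plain, n_inertial)                  # inertial needs fewer iterations
```

The stopping test uses the known fixed point only because this is a demo; in practice one monitors the residual $\|x_{n+1}-x_n\|$ instead.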

Table: Comparison of FPF Use Cases and Key Properties

Domain          | Operator Type          | Notable Properties
----------------|------------------------|----------------------------------------
Signal recovery | FNE, block FNE         | Block-iterative, Fejér monotonic
Optimization    | Exponential-multiplier | Contractive (under convexity)
Power flow      | Lipschitz/contraction  | High-voltage selection, scalability
Quantum mech.   | Projective FPF         | Time/event symmetry, unitarity
Data science    | Averaged, monotone     | Splitting, stochasticity, non-Euclidean
State-space     | Affine/Cholesky FPF    | Memory-optimal, numerically robust

6. Theoretical and Practical Implications

The FPF paradigm enables:

  • Reduction of nonlinear, nonconvex, or composite problems to tractable operator equations.
  • Modular algorithm design: new constraints or learnable regularizers can be integrated without sacrificing convergence theory, provided nonexpansiveness is preserved (Heaton et al., 2021).
  • Decoupling of prior, data, and algebraic components for parallelization and scalability, as demonstrated in power flow and large-scale imaging (Duque et al., 2024).
  • Strong numerical robustness, especially via Cholesky-based recursions in smoothing or filtering (Krämer, 2024).
  • Realization of structural symmetries (e.g., event and time symmetry) at the ontological level, as in quantum FPF models (Ridley, 2021, Ridley et al., 2023).

A plausible implication is that FPF strategies subsume or generalize many traditional optimization, numerical, and probabilistic inference algorithms, serving as the backbone for contemporary large-scale computational methods.

7. Representative Examples and Empirical Results

Concrete instantiations demonstrate FPF versatility:

  • Restoration from clipping and nonlinear spectral distortion (full block, 3-cycle relaxation) converges robustly (Combettes et al., 2020).
  • AC power flow via intersection-of-circles FPF yields exponential convergence to high-voltage solutions even for poor initializations, outperforming Newton–Raphson at scale (Guddanti et al., 2018).
  • Tensorized power-flow fixed point methods deliver substantial speedups for year-long (525,600 scenario) simulations compared to sparse NR, with built-in selection of the desired operational branch (Duque et al., 2024).
  • In fixed-point smoothing, new Cholesky-based algorithms match the runtime of the fastest conventional techniques while surpassing prior fixed-point or augmented-state Kalman smoothers in both memory efficiency and numerical robustness (Krämer, 2024).

References:

  • "A Fixed Point Framework for Recovering Signals from Nonlinear Transformations" (Combettes et al., 2020)
  • "Fixed Point Strategies in Data Science" (Combettes et al., 2020)
  • "Power Flow as Intersection of Circles: A new Fixed Point Method" (Guddanti et al., 2018)
  • "Tensor Power Flow Formulations for Multidimensional Analyses in Distribution Systems" (Duque et al., 2024)
  • "Constrained optimization through fixed point techniques" (Pedregal, 2014)
  • "Feasibility-based Fixed Point Networks" (Heaton et al., 2021)
  • "Consistent Newton-Raphson vs. fixed-point for variational multiscale formulations for incompressible Navier-Stokes" (0806.3514)
  • "Numerically Robust Fixed-Point Smoothing Without State Augmentation" (Krämer, 2024)
  • "Quantum probability from temporal structure" (Ridley, 2021)
  • "Time and event symmetry in quantum mechanics" (Ridley et al., 2023)
