Three-Point Iterative Method
- Three-point iterative methods are algorithmic techniques that combine three successive point evaluations to produce highly accurate approximations for solving nonlinear equations and fixed-point problems.
- They achieve optimal convergence orders, as demonstrated by eighth-order methods attaining the bound of the Kung–Traub conjecture, thereby reducing iteration counts and enhancing computational efficiency.
- These methods are versatile, extending to derivative-free approaches, stochastic strategies, systems of nonlinear equations, and optimization, effectively addressing various numerical challenges.
A three-point iterative method is an algorithmic framework that constructs each new iterate as a function of the current and two previous points, or via a staged process using three function evaluations per cycle. Such methods are central to root-finding for nonlinear equations, nonlinear systems, unconstrained optimization, and fixed-point acceleration, where they often achieve superior convergence rates compared with classical one- or two-point schemes. Three-point methods include families attaining the highest possible convergence order for a given number of function evaluations (the Kung–Traub bound), as well as derivative-free and stochastic strategies for scenarios where derivative information is unavailable or unreliable.
1. General Structure and Theoretical Limits
Three-point iterative methods seek solutions to equations of the type $f(x) = 0$ (scalar) or $F(x) = 0$ (vector), fixed-point problems $x = g(x)$, or unconstrained minimizations $\min_x f(x)$. The essential feature is the use, at each main iteration, of information from three points, typically $x_k$ and two auxiliary points $y_k$, $z_k$ (or the two previous iterates $x_{k-1}$, $x_{k-2}$), to construct the next approximation $x_{k+1}$.
The optimal possible convergence order for a multipoint method without memory is constrained by the Kung–Traub conjecture: for $n$ total function evaluations per iteration, the maximal order is $2^{n-1}$. For three-point schemes with one derivative evaluation, $n = 4$ yields the order bound $8$ (Matthies et al., 2016, Jaiswal et al., 2013, Matthies et al., 2015).
2. Scalar Nonlinear Equation Solvers: High-Order Schemes
Several eighth-order, three-point methods for scalar nonlinear equations have been developed adhering to the Kung–Traub conjecture. Representative algorithms include:
General structure (from (Matthies et al., 2016, Jaiswal et al., 2013, Matthies et al., 2015)):
- Step 1: $y_k = x_k - \dfrac{f(x_k)}{f'(x_k)}$ (Newton step)
- Step 2: $z_k = y_k - q(t_k)\,\dfrac{f(y_k)}{f'(x_k)}$, where $t_k = f(y_k)/f(x_k)$ and $q$ is a rational expression designed for high-order cancellation
- Step 3: $x_{k+1} = z_k - w(t_k, s_k)\,\dfrac{f(z_k)}{f'(x_k)}$, with $s_k = f(z_k)/f(y_k)$
Here, $q$ and $w$ are weight functions, possibly parameterized, calibrated so that Taylor series expansion about the root leads to cancellation of the error expansion up to the $e_k^8$ term. For instance, in (Matthies et al., 2016), one uses divided differences in the last step, confirming eighth-order convergence with four function evaluations.
Performance metrics, such as the computational order of convergence (COC) and the efficiency index $E = p^{1/n}$ ($p$ = order, $n$ = evaluations per step), consistently show that three-point, eighth-order methods (e.g., (Matthies et al., 2015, Matthies et al., 2016, Jaiswal et al., 2013)) are optimal in the sense of the Kung–Traub limit, with $E = 8^{1/4} \approx 1.682$, outperforming Newton's method ($E = 2^{1/2} \approx 1.414$).
Numerical tests across diverse nonlinear equations consistently show rapid reduction in residuals, with errors typically falling by many orders of magnitude, down to the working precision, within two iterations (Matthies et al., 2015).
3. Derivative-Free and Accelerated Root-Finding
When derivatives are unavailable or unreliable, three-point iterative procedures can still achieve robust performance. Two main classes are:
- Modified Secant-type methods: The three-point Secant method (Tiruneh, 2019) achieves a convergence order of approximately $1.84$ (the real root of $t^3 = t^2 + t + 1$), matching the order of Müller's quadratic-interpolation method, by using three previous iterates and only function values. A representative update built from Newton divided differences is $x_{k+1} = x_k - f(x_k)\,/\,\bigl(f[x_k, x_{k-1}] + (x_k - x_{k-1})\, f[x_k, x_{k-1}, x_{k-2}]\bigr)$, outperforming the classical two-point Secant method (order $\approx 1.62$) and providing greater robustness than Newton in ill-conditioned cases.
- Least-squares curve-fitting methods: The three-point least-squares method (Tiruneh et al., 2013) fits a local model by least squares through three equispaced points near the current iterate, with the spacing adapted dynamically based on finite-difference estimates. This approach attains quadratic convergence, matching Newton's order, while being derivative-free; however, it incurs a higher per-iteration function-evaluation cost.
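A minimal sketch of the Secant-type update above, assuming the divided-difference form shown (one new function evaluation per iteration):

```python
def three_point_secant(f, x0, x1, x2, tol=1e-12, max_iter=50):
    """Derivative-free root finding from three previous iterates: the slope
    of the quadratic interpolant at the newest point replaces f'(x)."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        f2 = f(x2)
        if abs(f2) < tol:
            return x2
        d01 = (f1 - f0) / (x1 - x0)              # f[x0, x1]
        d12 = (f2 - f1) / (x2 - x1)              # f[x1, x2]
        d012 = (d12 - d01) / (x2 - x0)           # f[x0, x1, x2]
        x3 = x2 - f2 / (d12 + (x2 - x1) * d012)  # quadratic-model slope at x2
        x0, x1, x2 = x1, x2, x3                  # shift the three-point window
        f0, f1 = f1, f2
    return x2
```

Only one function value is computed per iteration; the previous two are reused, which is what gives the method its efficiency relative to re-evaluating derivatives.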
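The least-squares idea can be sketched as follows. For three equispaced samples the least-squares linear fit has slope exactly equal to the central difference $(f(x+h) - f(x-h))/(2h)$; the residual-based spacing rule below is an illustrative heuristic, not the cited paper's exact adaptation:

```python
def ls_three_point(f, x, h=1e-2, tol=1e-12, max_iter=60):
    """Derivative-free iteration from three equispaced samples x-h, x, x+h:
    the least-squares line through them has slope (f(x+h) - f(x-h)) / (2h),
    which replaces f'(x) in a Newton-type update."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        slope = (f(x + h) - f(x - h)) / (2 * h)  # LS slope = central difference
        x = x - fx / slope
        h = max(abs(fx), 1e-8)                   # shrink spacing with residual (heuristic)
    return x
```

Each iteration costs three function evaluations, reflecting the higher per-iteration cost noted above.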
4. Systems of Nonlinear Equations
Three-point methods have been extended to vector equations $F(x) = 0$, where $F: \mathbb{R}^n \to \mathbb{R}^n$. The scheme of (R. et al., 18 Jan 2026) generalizes high-order scalar schemes to vector-valued settings without requiring higher derivatives. Iterations involve repeated Jacobian solves but attain sixth-order convergence. Convergence analysis via Taylor expansion verifies order 6, with per-iteration cost dictated by function and Jacobian evaluations and linear system solves. The method is more efficient (in the Ostrowski sense) than competing fifth-order methods for large $n$ (R. et al., 18 Jan 2026).
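The multi-step structure with repeated Jacobian solves can be sketched as below. This simplified frozen-Jacobian variant (order four, not the cited sixth-order scheme, which adds Jacobian-weighted corrections) shows how one Jacobian evaluation serves all three inner steps:

```python
import numpy as np

def frozen_jacobian_three_step(F, J, x, tol=1e-12, max_iter=30):
    """Three inner Newton-type corrections per cycle, all reusing the
    Jacobian evaluated at the cycle's starting point (a simplified sketch
    of the multi-step structure for systems F(x) = 0)."""
    for _ in range(max_iter):
        Jx = J(x)                            # one Jacobian evaluation per cycle
        y = x - np.linalg.solve(Jx, F(x))
        z = y - np.linalg.solve(Jx, F(y))    # frozen-Jacobian corrections;
        x = z - np.linalg.solve(Jx, F(z))    # in practice, LU-factorize Jx once
        if np.linalg.norm(F(x)) < tol:
            return x
    return x
```

In production code the three solves would share a single LU factorization of `Jx`, which is where the per-iteration savings over full Newton come from.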
5. Optimization and Fixed-Point Acceleration
Three-point strategies have been successfully applied in optimization and fixed-point iterations, enhancing convergence while maintaining low memory and computational overhead.
- Three-point Barzilai–Borwein (TBB) for Unconstrained Optimization: The TBB method (Qingying et al., 2022) chooses step sizes by least-squares fitting of three-point secant equations for the gradient, coupled with relaxed Armijo backtracking. It achieves global convergence for objectives with uniformly continuous gradients, linear or superlinear convergence under additional structure, and empirically outperforms the classical two-point Barzilai–Borwein method on large-scale quadratics and structured problems.
- Stochastic Three Points (STP) for Zeroth-Order Optimization: STP (Bergou et al., 2019) is a randomized, derivative-free approach suitable for black-box smooth minimization. At each iteration it selects the best of $f(x_k)$, $f(x_k + \alpha_k s_k)$, and $f(x_k - \alpha_k s_k)$ for a random direction $s_k$. For smooth problems, STP achieves the best known dependence on problem dimension $n$ and tolerance $\epsilon$: complexity $O(n/\epsilon^2)$ for nonconvex and $O(n/\epsilon)$ for convex problems.
- Three-point Polynomial Accelerator (TPA) for Fixed-Point Maps: The TPA (Alemanno, 12 Nov 2025) augments fixed-point schemes by fitting the contraction factor from residual dynamics, forming a quadratic blend of the last three iterates to annihilate the slowest-decaying mode. This dramatically reduces the number of map evaluations needed to reach a target tolerance in linear and mildly nonlinear systems and outperforms Picard, SOR, and shallow Anderson acceleration, with minimal additional memory and computational cost.
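A sketch of the TBB idea on a quadratic objective: the step size is fit by least squares over the two most recent secant pairs and safeguarded by Armijo backtracking. The aggregation rule below is an illustrative assumption, not the exact formula from (Qingying et al., 2022):

```python
import numpy as np

def tbb_quadratic(A, b, x, n_iter=200, c=1e-4):
    """Gradient descent on f(x) = 0.5 x^T A x - b^T x with a BB-type step
    fit by least squares over the two most recent secant pairs (s_i, y_i),
    safeguarded by Armijo backtracking."""
    f = lambda v: 0.5 * v @ A @ v - b @ v
    g = A @ x - b
    pairs = []                                  # recent (s_i, y_i) secant pairs
    for _ in range(n_iter):
        if pairs:
            # least-squares solution of min_a sum_i ||s_i - a * y_i||^2
            alpha = sum(s @ y for s, y in pairs) / sum(y @ y for s, y in pairs)
            alpha = abs(alpha)                  # guard against negative curvature
        else:
            alpha = 1.0 / np.linalg.norm(g)
        while f(x - alpha * g) > f(x) - c * alpha * (g @ g):
            alpha *= 0.5                        # Armijo backtracking safeguard
        x_new = x - alpha * g
        g_new = A @ x_new - b
        pairs = (pairs + [(x_new - x, g_new - g)])[-2:]  # keep last two pairs
        x, g = x_new, g_new
        if np.linalg.norm(g) < 1e-12:
            break
    return x
```

The backtracking loop plays the role of the relaxed Armijo safeguard described above; without it, BB-type steps are nonmonotone.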
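STP itself is only a few lines; the sketch below uses a $1/\sqrt{k}$ step-size decay, one standard choice (the cited paper analyzes several schedules):

```python
import numpy as np

def stp(f, x, n_iter=500, alpha0=1.0, seed=0):
    """Stochastic Three Points: per iteration, sample a random unit direction
    s_k and keep the best of x, x + alpha_k s_k, x - alpha_k s_k."""
    rng = np.random.default_rng(seed)
    fx = f(x)
    for k in range(n_iter):
        alpha = alpha0 / np.sqrt(k + 1.0)   # decaying step-size schedule
        s = rng.standard_normal(x.size)
        s /= np.linalg.norm(s)              # uniform direction on the sphere
        for cand in (x + alpha * s, x - alpha * s):
            fc = f(cand)
            if fc < fx:                     # keep the best of the three points
                x, fx = cand, fc
    return x
```

Each iteration costs two new function evaluations (the current value is cached), which is what makes the method attractive for expensive black-box objectives.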
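The three-point extrapolation idea behind TPA can be illustrated with a componentwise Aitken $\Delta^2$ stand-in, which blends the last three iterates to cancel the dominant geometric error mode; the actual TPA of (Alemanno, 12 Nov 2025) fits the contraction factor from residual dynamics, which this sketch does not reproduce:

```python
import numpy as np

def aitken_accelerated(g, x0, n_cycles=25, tol=1e-10):
    """Fixed-point acceleration from three points per cycle: x, g(x), g(g(x))
    are blended by componentwise Aitken Delta^2 extrapolation, cancelling the
    dominant geometric error mode of the underlying Picard iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_cycles):
        x1 = g(x)
        x2 = g(x1)
        d2 = x2 - x1
        denom = d2 - (x1 - x)               # second difference x2 - 2 x1 + x
        # Aitken Delta^2 where well-defined; fall back to the plain update
        corr = np.divide(d2 * d2, denom,
                         out=np.zeros_like(d2),
                         where=np.abs(denom) > 1e-14)
        x = x2 - corr
        if np.linalg.norm(g(x) - x) < tol:
            return x
    return x
```

Each cycle costs two map evaluations plus a cheap extrapolation, illustrating why such accelerators reduce the evaluation count relative to plain Picard iteration.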
6. Application-Specific Three-Point Methods
Three-point iterative techniques are critical in demanding engineering scenarios, notably high-precision solutions to the Colebrook equation for turbulent pipe friction. In (Praks et al., 2018), several three-point methods—Džunić–Petković–Petković, Bi–Ren–Wu, Sharma–Arora, etc.—achieve eighth-order convergence for the Colebrook equation,
$$\frac{1}{\sqrt{\lambda}} = -2 \log_{10}\!\left(\frac{\varepsilon}{3.7\,D} + \frac{2.51}{\mathrm{Re}\,\sqrt{\lambda}}\right),$$
with just two iterations typically reducing residuals to negligible levels even with poor initialization. Decisions between methods can hinge on algebraic simplicity, robustness near parameter boundaries (e.g., low roughness), and whether analytic derivatives are available.
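For reference, a plain-Newton solve of the Colebrook equation under the substitution $x = 1/\sqrt{\lambda}$, which is also the usual starting point for the high-order methods above (the parameter values in the usage are illustrative):

```python
import math

def colebrook(Re, eps_rel, tol=1e-12, max_iter=20):
    """Solve the Colebrook equation for the Darcy friction factor lambda,
    using the substitution x = 1/sqrt(lambda) and plain Newton iteration.
    eps_rel is the relative roughness epsilon/D."""
    log10e = math.log10(math.e)
    x = 6.0                       # generic initial guess for 1/sqrt(lambda)
    for _ in range(max_iter):
        arg = eps_rel / 3.7 + 2.51 * x / Re
        F = x + 2.0 * math.log10(arg)             # rearranged residual F(x) = 0
        dF = 1.0 + 2.0 * log10e * (2.51 / Re) / arg
        step = F / dF
        x -= step
        if abs(step) < tol:
            break
    return 1.0 / (x * x)          # lambda = 1 / x**2
```

For example, `colebrook(1e5, 1e-4)` returns a friction factor near the expected range for that Reynolds number and relative roughness; the eighth-order methods in the table below replace the Newton update with a three-point one on the same rearranged equation.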
A summary table of major three-point schemes for the Colebrook equation:
| Method | Convergence Order | Key Features |
|---|---|---|
| Džunić–Petković–Petković | 8 | Fewest iterations, robust |
| Bi–Ren–Wu | 8 | Bi-quadratic, robust |
| Sharma–Arora, Sharma–Sharma | 8 | Divided-difference structure |
| Neta, Chun–Neta | 6 | Lower algebraic cost |
| Jain (Steffensen-type) | 4 | Derivative-free |
7. Multivalued Mappings and Banach Space Iterations
Three-point iterative processes extend to Banach space settings, particularly for multivalued mappings satisfying mild nonexpansivity conditions. The scheme of (Eslamian et al., 2011) iterates via convex combinations of images under three multivalued maps, with bounded errors and adaptive weights. Strong convergence theorems are established under uniform convexity, Suzuki's condition (C), and summability of error sequences. This approach unifies and generalizes a broad class of iterative methods for fixed-point and split-operator equations in analysis and applied mathematics.
Three-point iterative methods constitute a central class of advanced numerical algorithms for nonlinear equations, systems, optimization, and fixed-point problems. Their design exploits carefully crafted multistage updates, targeting error cancellation at increasingly high orders, allowing them to achieve the theoretical optimum of convergence order per function evaluation, adapt to challenges such as derivative unavailability or ill-conditioning, and deliver superior convergence in both local and global analyses. Empirical evidence and extensive theoretical work establish their status as a leading choice for high-accuracy, efficient solutions in diverse computational fields.