
Three-Point Iterative Method

Updated 24 January 2026
  • Three-point iterative methods are algorithmic techniques that combine three successive point evaluations to produce highly accurate approximations for solving nonlinear equations and fixed-point problems.
  • They achieve optimal convergence orders, exemplified by eighth-order methods attaining the Kung–Traub bound, thereby reducing iteration counts and enhancing computational efficiency.
  • These methods are versatile, extending to derivative-free approaches, stochastic strategies, systems of nonlinear equations, and optimization, effectively addressing various numerical challenges.

A three-point iterative method is an algorithmic framework that constructs each new iterate as a function of the current and two previous points, or via a staged process using three function evaluations per cycle. Such methods are central to root-finding for nonlinear equations, nonlinear systems, unconstrained optimization, and fixed-point acceleration, where they often achieve superior convergence rates compared to classical one- or two-point schemes. Three-point methods include families attaining highest possible convergence order for a given number of function evaluations (the Kung–Traub bound), as well as derivative-free and stochastic strategies for scenarios where derivative information is unavailable or unreliable.

1. General Structure and Theoretical Limits

Three-point iterative methods seek solutions to equations of the type $f(x) = 0$ (scalar or vector $x$), fixed-point problems $x = g(x)$, or unconstrained minimizations $\min f(x)$. The essential feature is the use, at each main iteration, of information from three points, typically $x_n$, $y_n$, $z_n$, to construct the next approximation $x_{n+1}$.

The best possible convergence order for a multipoint method without memory is constrained by the Kung–Traub conjecture: for $k+1$ total function evaluations per iteration, the maximal order is $2^k$. For three-point schemes with one derivative evaluation, $k+1 = 4$ yields the order bound $8$ (Matthies et al., 2016, Jaiswal et al., 2013, Matthies et al., 2015).

2. Scalar Nonlinear Equation Solvers: High-Order Schemes

Several eighth-order, three-point methods for scalar nonlinear equations have been developed adhering to the Kung–Traub conjecture. Representative algorithms include:

General structure (from (Matthies et al., 2016, Jaiswal et al., 2013, Matthies et al., 2015)):

  • Step 1: $y_n = x_n - \dfrac{f(x_n)}{f'(x_n)}$
  • Step 2: $z_n = y_n - \mathcal{B}\bigl(f(x_n), f(y_n), f'(x_n)\bigr)$, where $\mathcal{B}$ is a rational expression designed for high-order cancellation
  • Step 3: $x_{n+1} = z_n - \mathcal{C}\bigl(f(z_n), f'(x_n), \ldots, J, G\bigr)$

Here, $J$ and $G$ are weight functions, possibly parameterized, calibrated so that Taylor series expansion about the root $x^*$ cancels the error expansion up to the $O((x_n - x^*)^8)$ term. For instance, (Matthies et al., 2016) uses divided differences in the last step, confirming eighth-order convergence with four function evaluations.
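The staged structure can be made concrete. The sketch below assumes Ostrowski's fourth-order two-point scheme for the first two steps and a Newton-type third step whose derivative comes from Hermite interpolation through $x_n$, $y_n$, $z_n$, which is one standard route to an eighth-order family with four evaluations per iteration; the families in the cited papers use different weight functions:

```python
def three_point_step(f, df, x):
    """One iteration of an eighth-order-style three-point scheme.

    Four evaluations per step: f(x), f'(x), f(y), f(z).
    Steps 1-2 are Ostrowski's fourth-order method; step 3 is a Newton-type
    correction with f'(z) replaced by the derivative of the Hermite
    interpolant matching f(x), f'(x), f(y), f(z)."""
    fx, dfx = f(x), df(x)
    y = x - fx / dfx                              # Step 1: Newton predictor
    fy = f(y)
    z = y - fy / dfx * fx / (fx - 2.0 * fy)       # Step 2: Ostrowski corrector
    fz = f(z)
    # Divided differences for the Hermite interpolant on nodes x, x, y, z
    f_xy = (fy - fx) / (y - x)
    f_xxy = (f_xy - dfx) / (y - x)
    f_yz = (fz - fy) / (z - y)
    f_xyz = (f_yz - f_xy) / (z - x)
    f_xxyz = (f_xyz - f_xxy) / (z - x)
    # Derivative of the Hermite interpolant evaluated at z
    dH = dfx + 2.0 * f_xxy * (z - x) + f_xxyz * ((z - x) ** 2 + 2.0 * (z - x) * (z - y))
    return z - fz / dH                            # Step 3: Newton-type step

def solve(f, df, x0, tol=1e-12, maxit=20):
    """Iterate until the residual |f(x)| drops below tol."""
    x = x0
    for _ in range(maxit):
        if abs(f(x)) < tol:
            break
        x = three_point_step(f, df, x)
    return x

# Example: cube root of 2 via f(x) = x^3 - 2
root = solve(lambda t: t ** 3 - 2.0, lambda t: 3.0 * t * t, 1.5)
```

With a high-order scheme like this, a handful of iterations from a reasonable starting point already reaches machine precision.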

Performance metrics, such as the computational order of convergence (COC) and the efficiency index $E = p^{1/m}$ ($p$ = order, $m$ = evaluations per step), consistently show that three-point, eighth-order methods (e.g., (Matthies et al., 2015, Matthies et al., 2016, Jaiswal et al., 2013)) are optimal in the sense of the Kung–Traub limit, with $E \simeq 1.68179$, outperforming Newton's method ($E \simeq 1.414$).
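The quoted indices follow directly from $E = p^{1/m}$:

```python
# Efficiency index E = p**(1/m): convergence order p per m evaluations per step
newton = 2.0 ** (1.0 / 2.0)       # Newton: order 2 with 2 evaluations (f, f')
three_point = 8.0 ** (1.0 / 4.0)  # optimal three-point: order 8 with 4 evaluations
```

Here `three_point` evaluates to about 1.68179 and `newton` to about 1.41421, matching the figures above.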

Numerical tests across diverse nonlinear equations consistently show rapid reduction of residuals, with typical errors dropping from $10^{-4}$ to below $10^{-30}$ within two iterations (Matthies et al., 2015).

3. Derivative-Free and Accelerated Root-Finding

When derivatives are unavailable or unreliable, three-point iterative procedures can still achieve robust performance. Two main classes are:

  • Modified secant-type methods: The three-point secant method (Tiruneh, 2019) achieves convergence order $\approx 1.83929$, matching Müller's cubic interpolation, by using three previous iterates and only function values. The update is

$$x_{k+1} = x_{k-2} - y_{k-2}\,\frac{D_{12}}{D_{12} - D_{23}}$$

with appropriate divided differences $D_{ij}$, outperforming the classical two-point secant method (order $\approx 1.618$) and providing greater robustness than Newton's method in ill-conditioned cases.

  • Least-squares curve-fitting methods: The three-point least-squares method (Tiruneh et al., 2013) fits a model $P(x) = a(x - b)^N$ through three equispaced points near $x_k$, adapting $N$ dynamically based on finite-difference estimates. This approach attains quadratic convergence, matching Newton's order, while being derivative-free; however, it incurs a higher per-iteration function-evaluation cost.
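The curve-fitting idea admits a compact sketch. The paper's exact fitting procedure is not reproduced here; the version below assumes $a(x-b)^N$ is matched to central finite differences $d_1 \approx f'$ and $d_2 \approx f''$ at $x_k$, which gives the exponent estimate $N = d_1^2/(d_1^2 - f\,d_2)$ and the root estimate $b = x_k - N f(x_k)/d_1$:

```python
def adaptive_power_step(f, x, h=1e-4):
    """One derivative-free step fitting P(t) = a*(t - b)**N near x.

    Central differences estimate P' and P''; the fitted exponent N adapts
    to the local multiplicity of the root, so multiple roots are handled
    without the slowdown plain Newton suffers."""
    fm, f0, fp = f(x - h), f(x), f(x + h)
    d1 = (fp - fm) / (2.0 * h)             # central estimate of f'(x)
    d2 = (fp - 2.0 * f0 + fm) / (h * h)    # central estimate of f''(x)
    N = d1 * d1 / (d1 * d1 - f0 * d2)      # fitted exponent
    return x - N * f0 / d1                 # root b of the fitted model

# One step on the triple root of (x - 1)**3 from x = 2 lands essentially at 1,
# because the fit recovers N ~ 3 and the model is then exact.
x1 = adaptive_power_step(lambda t: (t - 1.0) ** 3, 2.0)
```

For simple roots the same step reduces to a Newton-like update ($N \approx 1$), which is consistent with the quadratic convergence noted above.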

4. Systems of Nonlinear Equations

Three-point methods have been extended to vector equations $F(x) = 0$, where $x \in \mathbb{R}^n$. The scheme of (R. et al., 18 Jan 2026) generalizes high-order scalar schemes to vector-valued settings without requiring higher derivatives. Iterations involve repeated Jacobian solves but attain sixth-order convergence:

$$\begin{aligned} y^{(k)} &= x^{(k)} - F'(x^{(k)})^{-1} F(x^{(k)}) \\ z^{(k)} &= x^{(k)} - 2\bigl(F'(x^{(k)}) + F'(y^{(k)})\bigr)^{-1} F(x^{(k)}) \\ x^{(k+1)} &= z^{(k)} - \bigl[3F'(y^{(k)}) - F'(x^{(k)})\bigr]^{-1}\bigl[F'(x^{(k)}) + F'(y^{(k)})\bigr] F'(x^{(k)})^{-1} F(z^{(k)}) \end{aligned}$$

Convergence analysis via Taylor expansion verifies order 6, with per-iteration cost dictated by function and Jacobian evaluations and linear system solves. The method is more efficient (in the Ostrowski sense) than competing fifth-order methods for large $n$ (R. et al., 18 Jan 2026).
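For a scalar equation the three stages reduce to ordinary divisions, which makes the structure easy to see; in $\mathbb{R}^n$ each inverse below becomes a linear system solve with the corresponding Jacobian. A minimal sketch of the scalar case:

```python
import math

def sixth_order_step(f, df, x):
    """One iteration of the three-stage sixth-order scheme (scalar case).

    Transcribes the displayed formulas with F' a scalar derivative;
    in R^n each division becomes a Jacobian linear solve."""
    fx, dfx = f(x), df(x)
    y = x - fx / dfx                          # first stage: Newton predictor
    dfy = df(y)
    z = x - 2.0 * fx / (dfx + dfy)            # second stage
    fz = f(z)
    # third stage: z - [3f'(y) - f'(x)]^{-1} [f'(x) + f'(y)] f'(x)^{-1} f(z)
    return z - (dfx + dfy) * fz / ((3.0 * dfy - dfx) * dfx)

def solve6(f, df, x0, tol=1e-12, maxit=30):
    """Iterate until the residual |f(x)| drops below tol."""
    x = x0
    for _ in range(maxit):
        if abs(f(x)) < tol:
            break
        x = sixth_order_step(f, df, x)
    return x

# Example: solve exp(x) - 2 = 0, whose root is ln 2
root = solve6(lambda t: math.exp(t) - 2.0, math.exp, 1.0)
```

Note the per-iteration cost visible in the code: one function and two derivative (Jacobian) evaluations for the first two stages, plus one more function evaluation for the third.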

5. Optimization and Fixed-Point Acceleration

Three-point strategies have been successfully applied in optimization and fixed-point iterations, enhancing convergence while maintaining low memory and computational overhead.

  • Three-point Barzilai–Borwein (TBB) for Unconstrained Optimization: The TBB method (Qingying et al., 2022) uses step-size choices based on least-squares fitting of three-point secant equations for the gradient, coupled with relaxed Armijo backtracking. It achieves global convergence (for uniformly continuous gradient), linear or superlinear convergence (with proper structure), and empirically outperforms classical two-point Barzilai–Borwein in large-scale quadratics and structured problems.
  • Stochastic Three Points (STP) for Zeroth-Order Optimization: STP (Bergou et al., 2019) is a randomized, derivative-free approach suitable for black-box smooth minimization. Each iteration selects the best of $x$, $x + \alpha s$, and $x - \alpha s$ for a random direction $s$. STP achieves the best known dependence on problem dimension $n$ and tolerance $\varepsilon$: complexity $O(n \varepsilon^{-2})$ for smooth nonconvex problems and $O(n/\varepsilon)$ for convex ones.
  • Three-point Polynomial Accelerator (TPA) for Fixed-Point Maps: The TPA (Alemanno, 12 Nov 2025) augments fixed-point schemes by fitting the contraction factor from residual dynamics, forming a quadratic blend of the last three iterates to annihilate the slowest-decaying mode. This dramatically reduces the number of map evaluations needed to reach a target tolerance in linear and mildly nonlinear systems and outperforms Picard, SOR, and shallow Anderson acceleration, with minimal additional memory and computational cost.
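Of the three strategies above, STP is simple enough to state in full. In the sketch below the Gaussian sampling of directions and the decaying step-size schedule $\alpha_k = \alpha_0/\sqrt{k}$ are illustrative choices, not prescribed by the source:

```python
import math
import random

def stp(f, x0, alpha0=1.0, iters=500, seed=0):
    """Stochastic Three Points: keep the best of x, x + a*s, x - a*s.

    Derivative-free; uses only (at most) two new function evaluations
    per iteration, and the objective value never increases."""
    rng = random.Random(seed)
    x = list(x0)
    n = len(x)
    for k in range(1, iters + 1):
        # random direction, normalized to the unit sphere
        s = [rng.gauss(0.0, 1.0) for _ in range(n)]
        norm = math.sqrt(sum(c * c for c in s)) or 1.0
        s = [c / norm for c in s]
        alpha = alpha0 / math.sqrt(k)  # illustrative decaying step size
        forward = [xi + alpha * si for xi, si in zip(x, s)]
        backward = [xi - alpha * si for xi, si in zip(x, s)]
        x = min((x, forward, backward), key=f)  # best of the three points
    return x

# Minimize f(x) = ||x||^2 from (1, 1) without any gradient information
best = stp(lambda v: sum(c * c for c in v), [1.0, 1.0])
```

Because the current point is always among the three candidates, the method is monotone in $f$, which is what makes the complexity analysis tractable.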

6. Application-Specific Three-Point Methods

Three-point iterative techniques are critical in demanding engineering scenarios, notably high-precision solutions to the Colebrook equation for turbulent pipe friction. In (Praks et al., 2018), several three-point methods—Džunić–Petković–Petković, Bi–Ren–Wu, Sharma–Arora, etc.—achieve eighth-order convergence for the Colebrook equation,

$$F(x) = x + 2\log_{10}\!\left[\frac{2.51\,x}{Re} + \frac{\epsilon^*}{3.71}\right],$$

with just two iterations typically reducing residuals below $10^{-9}$ even with poor initialization. Choosing between methods can hinge on algebraic simplicity, robustness near parameter boundaries (e.g., low roughness), and whether analytic derivatives are available.
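A quick numerical check, with hypothetical flow parameters ($Re = 10^5$, relative roughness $\epsilon^* = 10^{-4}$, chosen only for illustration) and plain Newton iteration on the same residual $F$; the cited eighth-order schemes drive this residual down with fewer iterations:

```python
import math

RE, EPS = 1.0e5, 1.0e-4              # hypothetical Reynolds number and roughness

def F(x):
    # Colebrook residual in x = 1/sqrt(friction factor)
    return x + 2.0 * math.log10(2.51 * x / RE + EPS / 3.71)

def dF(x):
    # d/dx of F, using d/dx log10(u) = u' / (u ln 10)
    inner = 2.51 * x / RE + EPS / 3.71
    return 1.0 + (2.0 / math.log(10.0)) * (2.51 / RE) / inner

x = 7.0                              # rough initial guess for 1/sqrt(lambda)
for _ in range(10):
    x -= F(x) / dF(x)                # Newton update
```

For these parameters the iteration settles near $x \approx 7.35$, i.e., a friction factor $\lambda = 1/x^2$ of roughly 0.018, a plausible turbulent-flow value.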

A summary table of major three-point schemes for the Colebrook equation:

| Method | Convergence Order | Key Features |
|---|---|---|
| Džunić–Petković–Petković | 8 | Fewest iterations, robust |
| Bi–Ren–Wu | 8 | Bi-quadratic, robust |
| Sharma–Arora, Sharma–Sharma | 8 | Divided-difference structure |
| Neta, Chun–Neta | 6 | Lower algebraic cost |
| Jain (Steffensen-type) | 4 | Derivative-free |

7. Multivalued Mappings and Banach Space Iterations

Three-point iterative processes extend to Banach space settings, particularly for multivalued mappings satisfying mild nonexpansivity conditions. The scheme of (Eslamian et al., 2011) iterates via convex combinations of images under three multivalued maps, with bounded errors and adaptive weights. Strong convergence theorems are established under uniform convexity, Suzuki's condition (C), and summability of error sequences. This approach unifies and generalizes a broad class of iterative methods for fixed-point and split-operator equations in analysis and applied mathematics.


Three-point iterative methods constitute a central class of advanced numerical algorithms for nonlinear equations, systems, optimization, and fixed-point problems. Their design exploits carefully crafted multistage updates, targeting error cancellation at successively higher orders, allowing them to attain the theoretical optimum of order per function evaluation, adapt to challenges such as derivative unavailability or ill-conditioning, and deliver superior convergence in both local and global analyses. Empirical evidence and extensive theoretical work establish them as a leading choice for high-accuracy, efficient solutions across diverse computational fields.
