2.5. Acceleration of Convergence#
2.5.1. Aitken’s \(\Delta^2\) Method#
Assume that \(\{c_n\}_{n=0}^{\infty}\) converges to a point \(c\) linearly. Since the rate of convergence is linear, for large \(n\),

\[\frac{c_{n+1} - c}{c_{n} - c} \approx \frac{c_{n+2} - c}{c_{n+1} - c}.\]

It follows that,

\[(c_{n+1} - c)^2 \approx (c_{n+2} - c)(c_{n} - c).\]

Expanding both sides,

\[c_{n+1}^2 - 2c_{n+1}c + c^2 \approx c_{n+2}c_{n} - (c_{n+2} + c_{n})c + c^2.\]

Therefore, solving for \(c\),

\[c \approx \frac{c_{n+2}c_{n} - c_{n+1}^2}{c_{n+2} - 2c_{n+1} + c_{n}} = c_{n} - \frac{(c_{n+1} - c_{n})^2}{c_{n+2} - 2c_{n+1} + c_{n}},\]

which motivates Aitken's \(\Delta^2\) sequence

\[\hat{c}_{n} = c_{n} - \frac{(c_{n+1} - c_{n})^2}{c_{n+2} - 2c_{n+1} + c_{n}}.\]

Here \(\Delta\) and \(\nabla\) denote the difference operators:

Forward difference: \(\Delta c_{n} = c_{n+1} - c_{n}\)

Backward difference: \(\nabla c_{n} = c_{n} - c_{n-1}\)

Since \(\Delta^2 c_{n} = \Delta(\Delta c_{n}) = c_{n+2} - 2c_{n+1} + c_{n}\), the formula above can be written compactly as \(\hat{c}_{n} = c_{n} - (\Delta c_{n})^2 / \Delta^2 c_{n}\).
The following theorem is Theorem 2.14 from [Burden and Faires, 2005].

Theorem

Assume that \(\{c_n\}_{n=0}^{\infty}\) is a sequence that converges linearly to the limit \(c\), with \(c_n - c \neq 0\) for all \(n\), and that

\[\lim_{n \to \infty} \frac{c_{n+1} - c}{c_{n} - c} < 1.\]

The Aitken's \(\Delta^2\) sequence \(\{\hat{c}_n\}_{n=0}^{\infty}\) converges to \(c\) faster than \(\{c_n\}_{n=0}^{\infty}\) in the sense that

\[\lim_{n \to \infty} \frac{\hat{c}_{n} - c}{c_{n} - c} = 0.\]
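As a quick numerical check of this theorem (a minimal sketch; the particular sequence \(c_n = 0.5^n + 0.25^n\), which converges linearly to \(c = 0\), is an illustrative choice and not from the text):

```python
import numpy as np

# Linearly convergent sequence toward c = 0 (illustrative choice):
# c_n = 0.5**n + 0.25**n, so (c_{n+1} - c)/(c_n - c) -> 0.5 < 1.
n = np.arange(12)
c = 0.5**n + 0.25**n

# Aitken's Delta^2 transform: c_hat_n = c_n - (Delta c_n)^2 / Delta^2 c_n
d1 = np.diff(c)          # forward differences, Delta c_n
d2 = np.diff(c, n=2)     # second differences, Delta^2 c_n
c_hat = c[:-2] - d1[:-1]**2 / d2

# The ratio (c_hat_n - c)/(c_n - c) should tend to 0.
ratio = c_hat / c[:-2]
print(ratio)
```

The printed ratios shrink toward zero, confirming that \(\hat{c}_n\) approaches the limit faster than \(c_n\).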
2.5.2. Steffensen’s Method#

We can accelerate the convergence of fixed-point iteration from linear to quadratic by a modification of Aitken’s \(\Delta^2\) method. This procedure is known as Steffensen’s method, and it differs slightly from applying Aitken’s \(\Delta^2\) method directly. Aitken’s \(\Delta^2\) method generates the terms in the order

\[c_0,\quad c_1 = g(c_0),\quad c_2 = g(c_1),\quad \hat{c}_0,\quad c_3 = g(c_2),\quad \hat{c}_1,\quad \ldots,\]

where each \(\hat{c}_n\) is computed from \(c_n\), \(c_{n+1}\), and \(c_{n+2}\), but the underlying sequence \(\{c_n\}\) is never modified. Steffensen’s method instead restarts the fixed-point iteration from each accelerated value. Writing the terms as \(c_{j}^{k}\), where \(k\) counts the accelerations, every third term of the Steffensen sequence is generated by

\[c_{0}^{k+1} = c_{0}^{k} - \frac{\left(c_{1}^{k} - c_{0}^{k}\right)^2}{c_{2}^{k} - 2c_{1}^{k} + c_{0}^{k}},\]

and the rest of the points are generated by fixed-point iteration on the previous term:

\[c_{1}^{k} = g\left(c_{0}^{k}\right), \qquad c_{2}^{k} = g\left(c_{1}^{k}\right).\]
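The two-fixed-point-steps-plus-one-acceleration cycle can be sketched compactly as follows (a minimal illustration; the test function \(g(x) = \cos x\) and starting point are assumptions, not from the text):

```python
import math

def steffensen(g, c0, tol=1e-12, nmax=50):
    """One Aitken acceleration per cycle of two fixed-point steps."""
    for _ in range(nmax):
        c1 = g(c0)                            # first fixed-point step
        c2 = g(c1)                            # second fixed-point step
        denom = c2 - 2*c1 + c0                # Delta^2 c0
        if abs(denom) < 1e-15:                # guard against cancellation
            return c0
        c_new = c0 - (c1 - c0)**2 / denom     # Aitken update
        if abs(c_new - c0) < tol:
            return c_new
        c0 = c_new                            # restart from the accelerated value
    return c0

# Illustrative use: fixed point of g(x) = cos(x), near 0.739...
root = steffensen(math.cos, 0.5)
print(root)
```

Note that the update restarts the iteration from the accelerated value, which is exactly what distinguishes Steffensen’s method from a plain Aitken post-processing of the fixed-point iterates.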
import pandas as pd
import numpy as np

def Aitken_method(f, x0, TOL = 1e-4, Nmax = 100):
    '''
    Fixed-point iteration accelerated with Aitken's Delta^2 formula,
    restarting from each accelerated value (Steffensen's scheme).

    Parameters
    ----------
    f : function
        The fixed-point iteration function g.
    x0 : float
        Initial estimate.
    TOL : float, optional
        Tolerance (epsilon). The default is 1e-4.
    Nmax : int, optional
        Maximum number of iterations. The default is 100.

    Returns
    -------
    a dataframe with the iterates xn and the errors En
    '''
    xn = [x0]
    for _ in range(Nmax - 1):
        x1 = f(x0)                              # first fixed-point step
        x2 = f(x1)                              # second fixed-point step
        denominator = (x2 - x1) - (x1 - x0)     # Delta^2 x0
        if abs(denominator) < 1e-15:            # guard against cancellation
            print('Denominator is too small')
            break
        newx = x2 - ((x2 - x1)**2) / denominator  # Aitken update
        xn.append(newx)
        if abs(newx - x2) < TOL:
            break
        x0 = newx                               # restart from the accelerated value
    En = [0]
    En.extend(np.abs(np.diff(xn)))
    return pd.DataFrame({'xn': xn, 'En': En})
function [xn, En] = Aitken_method(f, x0, TOL, Nmax)
%{
Fixed-point iteration accelerated with Aitken's Delta^2 formula,
restarting from each accelerated value (Steffensen's scheme).

Parameters
----------
f : function handle
    The fixed-point iteration function g.
x0 : float
    Initial estimate.
TOL : float, optional
    Tolerance (epsilon). The default is 1e-4.
Nmax : int, optional
    Maximum number of iterations. The default is 100.

Returns
-------
xn : column vector of iterates
En : column vector of errors

Example
f = @(x) (1 - 0.5*x*x)
x0 = 1
%}
switch nargin
    case 3
        Nmax = 100;
    case 2
        TOL = 1e-4; Nmax = 100;
end
xn = zeros(Nmax, 1);
xn(1) = x0;
En = zeros(Nmax, 1);
En(1) = 0;
for n = 2:Nmax
    x1 = f(x0);                             % first fixed-point step
    x2 = f(x1);                             % second fixed-point step
    denominator = (x2 - x1) - (x1 - x0);    % Delta^2 x0
    if abs(denominator) < 1e-15             % guard against cancellation
        disp('Denominator is too small')
        xn = xn(1:n-1);
        En = En(1:n-1);
        break
    end
    newx = x2 - ((x2 - x1)^2) / denominator;  % Aitken update
    En(n) = abs(newx - x2);
    xn(n) = newx;
    if En(n) < TOL
        xn = xn(1:n);
        En = En(1:n);
        break
    end
    x0 = newx;                              % restart from the accelerated value
end
Example: Find the fixed point of \(g(x) = (x^2 - 1)/3\) on \([-1,1]\) using the fixed-point method and Steffensen’s method with \(c_0 = 0\). The exact real root here is the solution of \(x = (x^2 - 1)/3\) that lies in \([-1,1]\), namely

\[c = \frac{3 - \sqrt{13}}{2} \approx -0.3027756.\]
Solution:
Fixed-Point method: We can generate the entries of the sequence \(\{c_{0}, c_{1}, c_{2}, \ldots\}\) using \(c_{n} = g(c_{n-1})\) with \(c_{0} = 0\). We have,

\[c_1 = g(0) = -\frac{1}{3} \approx -0.333333, \qquad c_2 = g\left(-\tfrac{1}{3}\right) = -\frac{8}{27} \approx -0.296296, \qquad \ldots\]
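These first few iterates can be verified with a short self-contained script (a sketch using exact rational arithmetic; it does not rely on the helper modules used below):

```python
from fractions import Fraction

def g(x):
    # g(x) = (x**2 - 1)/3
    return (x * x - 1) / 3

# Three fixed-point steps from c0 = 0, kept as exact fractions.
c = Fraction(0)
iterates = []
for _ in range(3):
    c = g(c)
    iterates.append(c)

print(iterates)   # [Fraction(-1, 3), Fraction(-8, 27), Fraction(-665, 2187)]
```

The third iterate, \(-665/2187 \approx -0.304070\), matches the table produced below.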
# This part is used for producing tables and figures
import sys
sys.path.insert(0, '..')
import hd_tools as hd
from hd_RootFinding_Algorithms import Fixed_Point
import numpy as np

g = lambda x: (x**2 - 1)/3
data = Fixed_Point(g = g, x0 = 0, TOL = 1e-6, Nmax = 20)
display(data.style.format({'fxn': "{:.4e}"}))
hd.it_method_plot(data, title = 'Fixed-Point Method', ycol = 'En', ylabel = 'En', color = 'OrangeRed')
| n | xn | En |
|---|----|----|
| 0 | 0.000000 | 1.000000 |
| 1 | -0.333333 | 0.333333 |
| 2 | -0.296296 | 0.037037 |
| 3 | -0.304070 | 0.007773 |
| 4 | -0.302514 | 0.001556 |
| 5 | -0.302828 | 0.000315 |
| 6 | -0.302765 | 0.000063 |
| 7 | -0.302778 | 0.000013 |
| 8 | -0.302775 | 0.000003 |
| 9 | -0.302776 | 0.000001 |
The iteration was stopped when \(E_n < 10^{-6}\).
Steffensen’s method:

In this method, every third term is approximated as follows,

\[c_{0}^{k+1} = c_{0}^{k} - \frac{\left(c_{1}^{k} - c_{0}^{k}\right)^2}{c_{2}^{k} - 2c_{1}^{k} + c_{0}^{k}},\]

and the rest of the points are approximated through the fixed-point method.

Note that, for the first step, we set \(c_{0}^{0} = c_0\) (here \(c_0 = 0\)), and to approximate \(c_{0}^{1}\), we need two additional points, \(c_{1}^{0}\) and \(c_{2}^{0}\). These points can be approximated through fixed-point iteration as follows,

\[c_{1}^{0} = g(c_{0}^{0}) = g(0) = -\frac{1}{3} \approx -0.333333, \qquad c_{2}^{0} = g(c_{1}^{0}) = -\frac{8}{27} \approx -0.296296.\]

Now, the next approximation follows from the acceleration formula above:

\[c_{0}^{1} = c_{0}^{0} - \frac{\left(c_{1}^{0} - c_{0}^{0}\right)^2}{c_{2}^{0} - 2c_{1}^{0} + c_{0}^{0}} = 0 - \frac{(-1/3)^2}{-8/27 + 2/3} = -0.3.\]

The next two points need to be found through fixed-point iteration; thus,

\[c_{1}^{1} = g(c_{0}^{1}) = g(-0.3) \approx -0.303333, \qquad c_{2}^{1} = g(c_{1}^{1}) \approx -0.302663.\]

Similarly,

\[c_{0}^{2} = c_{0}^{1} - \frac{\left(c_{1}^{1} - c_{0}^{1}\right)^2}{c_{2}^{1} - 2c_{1}^{1} + c_{0}^{1}} \approx -0.302775.\]

Then, the next two points are found through the fixed-point method,

\[c_{1}^{2} = g(c_{0}^{2}) \approx -0.302776, \qquad c_{2}^{2} = g(c_{1}^{2}) \approx -0.302776.\]

Continuing this procedure, we can summarize the results in the following table.
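The hand computations above can be reproduced with a short self-contained script (a sketch; it does not depend on the `hd_RootFinding_Algorithms` helper used below):

```python
def g(x):
    return (x**2 - 1) / 3

rows = []
c0 = 0.0
for k in range(3):
    c1 = g(c0)                                   # first fixed-point step
    c2 = g(c1)                                   # second fixed-point step
    rows.append((c0, c1, c2))
    c0 = c0 - (c1 - c0)**2 / (c2 - 2*c1 + c0)    # Aitken update; restart here

for c0k, c1k, c2k in rows:
    print(f"{c0k:.6f}  {c1k:.6f}  {c2k:.6f}")
```

The three printed rows reproduce the cycles computed by hand: \((0, -0.333333, -0.296296)\), \((-0.3, -0.303333, -0.302663)\), and \((-0.302775, -0.302776, -0.302776)\).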
Furthermore, using Steffensen’s method:
from hd_RootFinding_Algorithms import Steffensen_method
g = lambda x : (x**2 - 1)/3
data = Steffensen_method(g = g, c0 = 0, TOL = 1e-6, Nmax = 100)
display(data.style.format({'fxn': "{:.4e}"}))
| n | cn0 | cn1 | cn2 | En |
|---|-----|-----|-----|----|
| 0 | 0.000000 | -0.333333 | -0.296296 | 0.000000 |
| 1 | -0.300000 | -0.303333 | -0.302663 | 0.300000 |
| 2 | -0.302775 | -0.302776 | -0.302776 | 0.002775 |
| 3 | -0.302776 | nan | nan | 0.000000 |