3.3. Interpolation and Lagrange polynomial#

3.3.1. Lagrange Interpolating Polynomials#

For a set of \(n + 1\) distinct data points (no two \(x_{j}\) are the same)

\[\begin{equation*} (x_{0},y_{0}),~(x_{1},y_{1}),~(x_{2},y_{2})\ldots ,~(x_{n-1},y_{n-1}),~(x_{n},y_{n}), \end{equation*}\]

the Lagrange basis polynomials are defined as follows,

(3.15)#\[\begin{equation} { L_{j}(x)=\prod _{\begin{smallmatrix}0\leq m\leq n\\m\neq j\end{smallmatrix}}{\frac {x-x_{m}}{x_{j}-x_{m}}}={\frac {(x-x_{0})}{(x_{j}-x_{0})}}\cdots {\frac {(x-x_{j-1})}{(x_{j}-x_{j-1})}}{\frac {(x-x_{j+1})}{(x_{j}-x_{j+1})}}\cdots {\frac {(x-x_{n})}{(x_{j}-x_{n})}},} \end{equation}\]

where \(y_{j} = f(x_{j})\) and \(0\leq j\leq n\).

Furthermore, the interpolation polynomial in the Lagrange form is a linear combination of these Lagrange basis polynomials

(3.16)#\[{L(x)=\sum _{j=0}^{n}y_{j}L_{j}(x) = y_{0} L_{0}(x) + y_{1} L_{1}(x) + \ldots + y_{n} L_{n}(x)}\]
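
Note that each basis polynomial \(L_{j}\) equals \(1\) at its own node \(x_{j}\) and \(0\) at every other node, which is exactly what makes \(L\) interpolate the data:

\[\begin{equation*} L_{j}(x_{m})={\begin{cases}1, & m = j,\\ 0, & m\neq j,\end{cases}} \qquad\Rightarrow\qquad L(x_{m})=\sum _{j=0}^{n}y_{j}L_{j}(x_{m})=y_{m},\qquad 0\leq m\leq n. \end{equation*}\]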

Theorem

Assume that \(x_0\), \(x_1\), … , \(x_{n}\) are \(n+1\) distinct numbers from \([a, b]\) and \(f \in C^{n+1} [a,b]\). Then, for each \(x \in [a, b]\), there exists a number \(\xi(x) \in (x_{0},x_{1})\bigcup \ldots \bigcup (x_{n-1},x_{n})\) such that

(3.17)#\[f(x) = L(x) + \frac{f^{(n+1)} (\xi(x)) }{ (n+1)!} (x- x_0) (x - x_1) \ldots (x - x_n),\quad x\in [a,b],\]

where \(L(x)\) is the interpolating polynomial provided in equation (3.16).

Proof:

Note that \(f(x_{j}) = L(x_{j})\) for \(0\leq j \leq n\). Therefore, when \(x = x_{j}\) for some \(0\leq j \leq n\), the product term vanishes and both sides reduce to \(f(x_{j}) = L(x_{j})\), so equation (3.17) holds for any choice of \(\xi(x) \in (x_{0},x_{1})\bigcup \ldots \bigcup (x_{n-1},x_{n})\).

Now, when \(x \neq x_{j}\) for \(j=0,1,\ldots, n\), define a function \(g(t)\) for \(t\in[a,b]\) as follows,

\[\begin{align*} g(t) &= f(t) - L(t) - [f(x) - L(x)]\frac{(t- x_0) (t - x_1) \ldots (t - x_n)}{(x- x_0) (x - x_1) \ldots (x - x_n)}\\ & = f(t) - L(t) - [f(x) - L(x)]\prod _{j = 0 }^{n}\frac{(t - x_j)}{(x - x_j)}. \end{align*}\]

Since \(f \in C^{n+1}[a, b]\) and \(L \in C^{\infty}[a, b]\), it follows that \(g \in C^{n+1}[a, b]\). Now, for \(t = x_{k}\), \(k=0,1,\ldots, n\), we have

\[\begin{align*} g(x_{k}) & = f(x_{k}) - L(x_{k}) - [f(x) - L(x)]\prod _{j = 0 }^{n}\frac{(x_{k} - x_j)}{(x - x_j)}\\ & = 0 - [f(x) - L(x)]\cdot 0 = 0. \end{align*}\]

Moreover, when \(t = x\), we have,

\[\begin{align*} g(x) &= f(x) - L(x) - [f(x) - L(x)]\prod _{j = 0 }^{n}\frac{(x - x_j)}{(x - x_j)}\\ & = f(x) - L(x) - [f(x) - L(x)] = 0. \end{align*}\]

Therefore, \(g \in C^{n+1}[a, b]\) and \(g\) is zero at \(n+2\) distinct points \(x\), \(x_0\), \(x_1\), … , \(x_n\). By Generalized Rolle’s Theorem, there is a number \(\xi\in (a, b)\) such that

\[\begin{align*} g^{(n+1)}(\xi) = 0. \end{align*}\]

It follows that

(3.18)#\[0 = g^{(n+1)}(\xi) = f^{(n+1)}(\xi) - L^{(n+1)}(\xi) - [f(x) - L(x)]\frac{d^{n+1}}{dt^{n+1}} \left[\prod _{j = 0 }^{n}\frac{(t - x_j)}{(x - x_j)}\right]_{t = \xi}\]

Since \(L\) is a polynomial of degree at most \(n\), its \((n+1)\)st derivative is identically zero, so \(L^{(n+1)}(\xi) = 0\). On the other hand, \(\prod _{j = 0 }^{n}\frac{(t - x_j)}{(x - x_j)}\) is a polynomial of degree \(n+1\) in \(t\), and we can write

\[\begin{align*} \prod _{j = 0 }^{n}\frac{(t - x_j)}{(x - x_j)} = \frac{t^{n+1}}{\prod _{j = 0 }^{n}(x-x_{j})} + \text{lower-degree terms.} \end{align*}\]

The \((n+1)\)st derivative of those lower-degree terms is zero, and

(3.19)#\[\frac{d^{n+1}}{dt^{n+1}}\prod _{j = 0 }^{n}\frac{(t - x_j)}{(x - x_j)} = \frac{(n+1)!}{\prod _{j = 0 }^{n}(x-x_{j})}.\]

Substituting this and \(L^{(n+1)}(\xi) = 0\) into equation (3.18) gives

\[\begin{align*} 0 = g^{(n+1)}(\xi) = f^{(n+1)}(\xi) - 0 - [f(x) - L(x)] \frac{(n+1)!}{\prod _{j = 0 }^{n}(x-x_{j})}. \end{align*}\]

Solving the above equation for \(f(x)\), we have,

\[\begin{align*} f(x) = L(x) + \frac{f^{(n+1)} (\xi(x)) }{ (n+1)!} \prod _{j = 0 }^{n}(x-x_{j}). \end{align*}\]

Corollary

In addition to the assumptions of the theorem above, assume that

(3.20)#\[\begin{equation} \left| f^{(n+1)} (\xi(x)) \right| \leq M \end{equation}\]

for some value \(M\) and all points \(\xi(x) \in (x_{0},~x_{1})\bigcup \ldots \bigcup (x_{n-1},~x_{n})\), then

(3.21)#\[\begin{align} |f(x) - L(x)| &= \frac{\left| f^{(n+1)} (\xi(x)) \right|}{(n+1)!} \left| (x-x_{0})(x-x_{1})\ldots(x-x_{n}) \right| \notag\\ & \leq \frac{M}{(n+1)!} \left| (x-x_{0})(x-x_{1})\ldots(x-x_{n}) \right|. \end{align}\]
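
Before turning to the implementation, here is a minimal numerical sketch of this bound, under assumed data: \(f(x) = \sin(x)\) interpolated at five equally spaced nodes on \([0, 1]\), where \(|f^{(5)}| = |\cos| \leq 1\) gives \(M = 1\).

import numpy as np
from math import factorial

# A sketch of the corollary's bound (assumed test problem: f(x) = sin(x) on [0, 1])
xn = np.linspace(0, 1, 5)              # n + 1 = 5 distinct, equally spaced nodes
n = len(xn) - 1

x = np.linspace(0, 1, 1001)            # fine evaluation grid
# |f^(n+1)(x)| = |sin^(5)(x)| = |cos(x)| <= 1 on [0, 1], so take M = 1
M = 1.0

# |(x - x_0)(x - x_1)...(x - x_n)| evaluated on the grid
w = np.abs(np.prod([x - xj for xj in xn], axis=0))
bound = M / factorial(n + 1) * w
print('worst-case error bound on [0, 1]: %.3e' % bound.max())

The functions below implement the Lagrange basis polynomials and the interpolant itself, first in Python and then in MATLAB.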
import numpy as np

def LagrangePolyCoeff(x, i, xn):
    '''
    Evaluate the Lagrange basis polynomial L_i at x.

    Parameters
    ----------
    x : float or array
        DESCRIPTION. point(s) x at which L_i is evaluated
    i : int
        DESCRIPTION. index of L_{i}
    xn : list/array
        DESCRIPTION. a list/array consisting of points x0, x1, x2, ..., xn

    Returns
    -------
    Lj : float or array
        DESCRIPTION. the value of L_i(x)

    '''
    Lj = 1
    for j in range(len(xn)):
        if i != j:
            Lj *= (x - xn[j]) / (xn[i] - xn[j])
    return Lj

def LagrangePoly(x, xn, yn):
    '''
    Evaluate the Lagrange interpolating polynomial L at x.

    Parameters
    ----------
    x : float or array
        DESCRIPTION. point(s) x at which L is evaluated
    xn : list/array
        DESCRIPTION. a list/array consisting of points x0, x1, x2, ..., xn
    yn : list/array
        DESCRIPTION. a list/array consisting of points
        y0 = f(x0), y1 = f(x1), ..., yn = f(xn)

    Returns
    -------
    L : float or array
        DESCRIPTION. L(x)

    '''
    # stack the basis values L_0(x), ..., L_n(x) and form sum_j y_j * L_j(x)
    basis = np.array([LagrangePolyCoeff(x, i, xn) for i in range(len(xn))])
    L = np.dot(yn, basis)
    return L
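
As a quick sanity check (assuming the two functions above are in scope), the interpolant should reproduce the data at the nodes, since \(L(x_{j}) = y_{j}\):

# The interpolant must pass through each data point (x_j, y_j)
xn = np.array([1, 2, 3, 4, 5, 6])
yn = np.array([-3, 0, -1, 2, 1, 4])
print([LagrangePoly(xi, xn, yn) for xi in xn])
# expected output (up to rounding): [-3.0, 0.0, -1.0, 2.0, 1.0, 4.0]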
function [Lj] = LagrangePolyCoeff(x, i, xn)
%{
Evaluate the Lagrange basis polynomial L_i at x.

Parameters
----------
x : float or array
    DESCRIPTION. point(s) x at which L_i is evaluated
i : int
    DESCRIPTION. index of L_{i}
xn : list/array
    DESCRIPTION. a list/array consisting of points x0, x1, x2, ..., xn

Returns
-------
Lj : float or array
    DESCRIPTION. the value of L_i(x)

%}
Lj = 1;
for j = 1:length(xn)
    if i ~= j
        Lj = Lj .* (x - xn(j)) / (xn(i) - xn(j));
    end
end


function [L] = LagrangePoly(x, xn, yn)
%{
Evaluate the Lagrange interpolating polynomial L at x.

Parameters
----------
x : float or array
    DESCRIPTION. point(s) x at which L is evaluated
xn : list/array
    DESCRIPTION. a list/array consisting of points x0, x1, x2, ..., xn
yn : list/array
    DESCRIPTION. a list/array consisting of points
    y0 = f(x0), y1 = f(x1), ..., yn = f(xn)
Returns
-------
L : float or array
    DESCRIPTION. L(x)
Example:
xn = [1, 2, 3, 4, 5, 6];
yn = [-3, 0, -1, 2, 1, 4];
x = linspace(min(xn)-1, max(xn)+1, 100);
%}
% row i holds the basis polynomial L_i evaluated at every point of x
basis = zeros(length(xn), length(x));
for i = 1:length(xn)
    basis(i, :) = LagrangePolyCoeff(x, i, xn);
end
% L(x_k) = sum_i yn(i) * L_i(x_k)
L = 0 .* x;
for j = 1:length(x)
    L(j) = dot(yn, basis(:, j));
end

Example: Consider the following data points

\[\begin{align*} \{(1,-3),~(2,0),~(3,-1),~(4,2),~(5,1),~(6,4)\} \end{align*}\]

and construct an interpolating polynomial in the Lagrange form. We carry out the hand computation with the first four data points; the numerical experiment below uses all six.

Solution:

Calculating \(L_{j}\)s with \(n = 3\) for \(j=0,1,2,3\), using the first four data points:

\[\begin{equation*} { L_{j}(x)={\frac {(x-x_{0})}{(x_{j}-x_{0})}}\cdots {\frac {(x-x_{j-1})}{(x_{j}-x_{j-1})}}{\frac {(x-x_{j+1})}{(x_{j}-x_{j+1})}}\cdots {\frac {(x-x_{n})}{(x_{j}-x_{n})}},} \end{equation*}\]
\[\begin{align*} L_{0}(x)&= \frac {(x-x_{1})}{(x_{0}-x_{1})}\frac {(x-x_{2})}{(x_{0}-x_{2})} \frac {(x-x_{3})}{(x_{0}-x_{3})}= \frac {(x- 2)}{(1-2)}\frac {(x- 3)}{(1-3)}\frac {(x- 4)}{(1-4)}\\ &=-\frac{\left(x-2\right)\,\left(x-3\right)\,\left(x-4\right)}{6} = -\frac{x^3}{6}+\frac{3\,x^2}{2}-\frac{13\,x}{3}+4. \end{align*}\]

Similarly, we can calculate all \(L_{j}\)s:

\[\begin{align*} L_{0}(x)&=-\frac{\left(x-2\right)\,\left(x-3\right)\,\left(x-4\right)}{6} = -\frac{x^3}{6}+\frac{3\,x^2}{2}-\frac{13\,x}{3}+4,\\ L_{1}(x)&=\frac{\left(x-1\right)\,\left(x-3\right)\,\left(x-4\right)}{2}= \frac{x^3}{2}-4\,x^2+\frac{19\,x}{2}-6,\\ L_{2}(x)& = -\frac{\left(x-1\right)\,\left(x-2\right)\,\left(x-4\right)}{2} =-\frac{x^3}{2}+\frac{7\,x^2}{2}-7\,x+4,\\ L_{3}(x)&=\frac{\left(x-1\right)\,\left(x-2\right)\,\left(x-3\right)}{6}= \frac{x^3}{6}-x^2+\frac{11\,x}{6}-1. \end{align*}\]

Moreover,

\[\begin{equation*} L(x)= y_{0} L_{0}(x) + y_{1} L_{1}(x) + y_{2} L_{2}(x) + y_{3} L_{3}(x)= \frac{4\,x^3}{3}-10\,x^2+\frac{71\,x}{3}-18. \end{equation*}\]
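
This hand computation can be verified symbolically. The following is a small sketch (assuming SymPy is available) that rebuilds each \(L_{j}\) from its definition and expands the linear combination:

import sympy as sp

x = sp.symbols('x')
xn = [1, 2, 3, 4]            # the first four nodes
yn = [-3, 0, -1, 2]

def basis(j):
    # L_j(x) = product over m != j of (x - x_m) / (x_j - x_m)
    Lj = sp.Integer(1)
    for m in range(len(xn)):
        if m != j:
            Lj *= sp.Rational(1, xn[j] - xn[m]) * (x - xn[m])
    return Lj

L = sp.expand(sum(yn[j] * basis(j) for j in range(len(xn))))
print(L)                     # expected: 4*x**3/3 - 10*x**2 + 71*x/3 - 18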
# This part is used for producing tables and figures
import sys
sys.path.insert(0,'..')
import hd_tools as hd
import numpy as np
from hd_Interpolation_Algorithms import LagrangePoly
# A set of distinct points
xn = np.array([1, 2, 3, 4, 5, 6])
yn = np.array([-3, 0, -1, 2, 1, 4])
x = np.linspace(xn.min() - 1, xn.max() + 1, 100)
y = LagrangePoly(x, xn, yn)

# Plots
hd.interpolation_method_plot(xn, yn, x, y, title = 'Lagrange Interpolating Polynomials')

Example: Assume that \(x_{0}<x_{1}\) are two distinct numbers in \([a,~b]\) and \(f \in C^{2} [a,~b]\). Also, assume that the Lagrange interpolation polynomial \(L(x)\) is built from only the two data points \(\{(x_{0},y_{0}),~(x_{1},y_{1})\}\). Then, demonstrate the following using the error formula for the Lagrange interpolation:

(3.22)#\[\begin{equation} |f(x) - L(x)| \leq \frac{h^2}{8}M \end{equation}\]

where \( \displaystyle{M = \max_{x_0\leq x\leq x_1}} |f''(x)|\) and \(h = x_1 - x_0\).

Solution: The Lagrange interpolating polynomial using two data points \(\{(x_{0},y_{0}),~(x_{1},y_{1})\}\):

(3.23)#\[\begin{equation} L(x) = y_{0}L_{0}(x) + y_{1}L_{1}(x) =\frac{(x - x_{1})}{(x_{0} - x_{1})} y_{0} + \frac{(x - x_{0})}{(x_{1} - x_{0})} y_{1}. \end{equation}\]

Using the two given data points (\(n = 1\)), the error formula (3.17) becomes

(3.24)#\[\begin{equation} f(x) - L(x) = \frac{(x - x_{0})(x - x_{1})}{2!} f''(\xi(x)), \end{equation}\]

where \(\xi(x)\) is an unknown point between \(x_{0}\) and \(x_{1}\). Hence

(3.25)#\[\begin{equation} |f(x) - L(x)| = \left|\frac{(x - x_{0})(x - x_{1})}{2!}\right| \left| f''(\xi(x)) \right|. \end{equation}\]

Because \(\xi(x)\) is unknown, the value of \(f''(\xi(x))\) cannot be calculated precisely. We may, however, bound the error by finding the largest possible value of \(|f''(\xi(x))|\). Let

(3.26)#\[\begin{equation} M = \max_{x_0\leq x\leq x_1}~|f''(x)|. \end{equation}\]

Then, the bound of \(|f''(x)|\) on \([x_{0},~x_{1}] \) can be obtained as follows,

(3.27)#\[\begin{equation}\label{eq.3.35} |f''(\xi(x))| \leq M. \end{equation}\]

Therefore,

(3.28)#\[\begin{equation} |f(x) - L(x)| \leq \frac{M}{2} |(x - x_{0})(x - x_{1})|. \end{equation}\]
Note that this inequality also follows from the corollary above.

Observe that the maximum of \(|g(x)|\), where \(g(x) = (x - x_{0})(x - x_{1})\), on \([x_{0},~x_{1}]\) occurs at the critical point \(x = \dfrac{x_{0} + x_{1}}{2}\), since

(3.29)#\[\begin{equation} g'(x) = (x - x_{0}) + (x - x_{1}), \end{equation}\]

and it follows from setting \(g'(x) = 0\) that

(3.30)#\[\begin{equation} g'(x) = 0 \quad \Rightarrow \quad (x - x_{0}) + (x - x_{1}) = 0, \quad \Rightarrow \quad x = \dfrac{(x_{0} + x_{1})}{2}. \end{equation}\]

Thus the maximum of \(|g|\) on \([x_{0},~x_{1}]\) is \(|(x - x_{0})(x - x_{1})| = (x_{1} - x_{0})^2/4\), attained at this midpoint. Therefore,

(3.31)#\[\begin{equation} \max_{x_0\leq x\leq x_1}~|(x - x_{0})(x - x_{1})| = \frac{(x_{1} - x_{0})^2}{4}. \end{equation}\]

As a result, for each \(x \in [x_{0},~x_{1}]\), we obtain

(3.32)#\[\begin{equation} |f(x) - L(x)| \leq \frac{(x_{1} - x_{0})^2}{8}M, \end{equation}\]

Now, since \(h = x_1 - x_0\),

(3.33)#\[\begin{equation} |f(x) - L(x)| \leq \frac{h^2}{8}M. \end{equation}\]
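
A short numerical check of this inequality (a sketch under assumed data: \(f(x) = {\mathrm{e}}^x + \sin(2x)\), the function of the next example, linearly interpolated between \(x_0 = 0.1\) and \(x_1 = 0.3\), with \(M\) estimated on a fine grid):

import numpy as np

f = lambda t: np.exp(t) + np.sin(2 * t)
x0, x1 = 0.1, 0.3
h = x1 - x0

x = np.linspace(x0, x1, 1001)
# two-point (linear) Lagrange interpolant L(x) = y0*L0(x) + y1*L1(x)
L = f(x0) * (x - x1) / (x0 - x1) + f(x1) * (x - x0) / (x1 - x0)

# M = max |f''| on [x0, x1];  f''(t) = e^t - 4*sin(2t)
M = np.abs(np.exp(x) - 4 * np.sin(2 * x)).max()

print('max actual error :', np.abs(f(x) - L).max())
print('h^2 * M / 8 bound:', h**2 * M / 8)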

Example: Consider the following table having the data for \(f(x) = e^{x} + \sin(2x)\) on \([0.1,~0.5]\):

| \(x_i\) | 0.1000 | 0.3000 | 0.5000 |
|:-------:|:------:|:------:|:------:|
| \(y_i\) | 1.3038 | 1.9145 | 2.4902 |

- a. Use Lagrange polynomials to interpolate the points.

- b. Find the approximation of \(f(0.4)\) using the Lagrange interpolating polynomial, and estimate an error bound and the absolute error of the approximation.

Solution:

a. Calculating \(L_{j}\)s with \(n = 2\) for \(j=0,1,2\):

\[\begin{equation*} { L_{j}(x)={\frac {(x-x_{0})}{(x_{j}-x_{0})}}\cdots {\frac {(x-x_{j-1})}{(x_{j}-x_{j-1})}}{\frac {(x-x_{j+1})}{(x_{j}-x_{j+1})}}\cdots {\frac {(x-x_{n})}{(x_{j}-x_{n})}},} \end{equation*}\]
\[\begin{align*} L_{0}(x)&= \frac {(x-x_{1})}{(x_{0}-x_{1})}\frac {(x-x_{2})}{(x_{0}-x_{2})} = \frac {(x-0.3)}{(0.1-0.3)}\frac {(x-0.5)}{(0.1-0.5)} \\ & = \frac{5\,\left(2\,x-1\right)\,\left(10\,x-3\right)}{8} = \frac{25\,x^2}{2}-10\,x+\frac{15}{8},\\ L_{1}(x)&= \frac {(x-x_{0})}{(x_{1}-x_{0})}\frac {(x-x_{2})}{(x_{1}-x_{2})} = \frac {(x-0.1)}{(0.3-0.1)}\frac {(x-0.5)}{(0.3-0.5)} \\ &= -\frac{5\,\left(2\,x-1\right)\,\left(10\,x-1\right)}{4} = -25\,x^2+15\,x-\frac{5}{4},\\ L_{2}(x)&= \frac {(x-x_{0})}{(x_{2}-x_{0})}\frac {(x-x_{1})}{(x_{2}-x_{1})} = \frac {(x-0.1)}{(0.5-0.1)}\frac {(x-0.3)}{(0.5-0.3)} \\ & =\frac{\left(10\,x-1\right)\,\left(10\,x-3\right)}{8} = \frac{25\,x^2}{2}-5\,x+\frac{3}{8}. \end{align*}\]

Observe that in the above computations, we multiplied both numerator and denominator by 10 for convenience. Moreover,

\[\begin{equation*} L(x)= y_{0} L_{0}(x) + y_{1} L_{1}(x) + y_{2} L_{2}(x) = -0.437\,x^2+3.228\,x+0.985. \end{equation*}\]
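
As with the previous example, this result can be checked numerically; the following sketch fits the unique degree-2 polynomial through the three points with NumPy (which must coincide with the Lagrange interpolant) and evaluates it at \(x = 0.4\) for part b:

import numpy as np

xn = np.array([0.1, 0.3, 0.5])
yn = np.array([1.3038, 1.9145, 2.4902])
# the unique quadratic through 3 points equals the Lagrange interpolant
coeffs = np.polyfit(xn, yn, 2)
print(coeffs)                    # expected (approx.): [-0.4375, 3.2285, 0.9853]
print(np.polyval(coeffs, 0.4))   # expected: approx. 2.2067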

b. First,

\[\begin{equation*} L(0.4) = 2.206718, \end{equation*}\]

while \(f(0.4) = 2.209181\). Therefore, the absolute error here is

(3.34)#\[\begin{align} |L(0.4) - f(0.4)| = 2.462763\times 10^{-03}. \end{align}\]

Now to compute an error bound of the approximation, we use the following formula

(3.35)#\[\begin{align} |f(x) - L(x)| &= \frac{|f^{(3)}(\xi)|}{3!} \left| (x - 0.1) (x - 0.3) (x - 0.5) \right| \notag\\ & \leq \frac{M}{3!} \left| (x - 0.1) (x - 0.3) (x - 0.5) \right|. \end{align}\]

Taking the third derivative of the given function, we get

\[\begin{align*} f'(x) & = 2\,\cos\left(2\,x\right)+{\mathrm{e}}^x,\\ f''(x) & = {\mathrm{e}}^x-4\,\sin\left(2\,x\right),\\ f'''(x) & = {\mathrm{e}}^x-8\,\cos\left(2\,x\right). \end{align*}\]
Figure: plot of \(f'''(x) = {\mathrm{e}}^x - 8\,\cos(2x)\) on the interval \([0.1,~0.5]\).

As can be seen, \(f'''(x)\) is increasing but remains negative on \((0.1,~0.5)\); consequently its absolute value is decreasing there. Thus

\[\begin{equation*} |f^{(3)}(\xi(x))| = |{\mathrm{e}}^x-8\,\cos\left(2\,x\right)|,\qquad \xi(x) \in (0.1, 0.5), \end{equation*}\]

takes its maximum at \(x = 0.1\), and

\[\begin{equation*} M = \max_{0.1 \leq x \leq 0.5} |f^{(3)}(x)| = \left|{\mathrm{e}}^{0.1}-8\,\cos\left(2\,(0.1)\right)\right| = 6.7354. \end{equation*}\]
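
This value can be confirmed numerically with a quick sketch, evaluating \(|f'''|\) on a fine grid over \([0.1,~0.5]\):

import numpy as np

x = np.linspace(0.1, 0.5, 1001)
f3 = np.exp(x) - 8 * np.cos(2 * x)     # f'''(x)
print("max |f'''| on [0.1, 0.5]: %.4f" % np.abs(f3).max())
# expected: 6.7354, attained at the endpoint x = 0.1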

Therefore, the error bound at \(x = 0.4\):

(3.36)#\[\begin{equation} |f(0.4) - L(0.4)| \leq \frac{6.7354}{3!} \left| (0.4 - 0.1) (0.4 - 0.3) (0.4 - 0.5) \right| = 3.367700 \times 10^{-03}. \end{equation}\]

As expected, the error bound is larger than the actual absolute error at \(x = 0.4\): the bound captures the worst-case error, not the exact one.