1.4. Big O Notation

The concept of Big \(\mathcal{O}\) provides a measure of the convergence rate and error of an algorithm. It is commonly used to describe the rate at which an error term diminishes. Consider the Taylor series expansion of \(e^{x}\):

\[\begin{equation*} e^{x} = 1+x+ \frac {x^{2}}{2!} + \frac {x^{3}}{3!} + \frac {x^{4}}{4!} +\ldots \end{equation*}\]

When \(x\) approaches zero, we can approximate \(e^{x}\) as:

\[\begin{equation*} e^{x} = 1+x+ \frac {x^{2}}{2!} + \frac {x^{3}}{3!} + \mathcal{O}(x^4) \end{equation*}\]

This implies that the absolute error between \(e^{x}\) and its cubic approximation is bounded by a constant multiple of \(x^4\) as \(x\) approaches zero:

\[\begin{equation*} \left| e^{x} - (1+x+ \frac {x^{2}}{2!} + \frac {x^{3}}{3!}) \right| = \mathcal{O}(x^4) \end{equation*}\]
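This scaling can be checked numerically. The sketch below (an illustration added here, not part of the original derivation) evaluates the truncation error of the cubic Taylor polynomial at successively halved values of \(x\); if the error is truly \(\mathcal{O}(x^4)\), halving \(x\) should shrink the error by roughly a factor of \(2^4 = 16\).

```python
import math

def taylor_cubic(x):
    """Cubic Taylor polynomial of e^x about x = 0."""
    return 1 + x + x**2 / 2 + x**3 / 6

# Halve x repeatedly; an O(x^4) error should drop by about 16x per step.
x = 0.1
prev_err = None
for _ in range(5):
    err = abs(math.exp(x) - taylor_cubic(x))
    ratio = prev_err / err if prev_err is not None else float("nan")
    print(f"x = {x:.6f}  error = {err:.3e}  ratio = {ratio:.2f}")
    prev_err = err
    x /= 2
```

The printed ratios settle near 16, consistent with fourth-order behavior of the error term.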

Another application of Big \(\mathcal{O}\) is in describing the growth rate of functions, which underlies the analysis of algorithm efficiency. For instance, consider the function \(f(x) = x^3 + 2x^2 + x\). As \(x\) grows large, the cubic term dominates, so \(f(x) = \mathcal{O}(x^3)\): the function grows no faster than a constant multiple of \(x^3\). Conversely, as \(x\) approaches zero, the linear term dominates, so \(f(x) = \mathcal{O}(x)\): the function goes to zero at least as fast as a constant multiple of \(x\).
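As a brief illustration of these two regimes (example code added here, assuming the function \(f\) defined above), one can compare \(f(x)\) against its dominant term: the ratio \(f(x)/x^3\) approaches 1 as \(x\) grows, while \(f(x)/x\) approaches 1 as \(x\) shrinks.

```python
def f(x):
    """f(x) = x^3 + 2x^2 + x."""
    return x**3 + 2 * x**2 + x

# For large x the cubic term dominates: f(x)/x^3 -> 1.
for x in [10.0, 100.0, 1000.0]:
    print(f"x = {x:>8.1f}  f(x)/x^3 = {f(x) / x**3:.4f}")

# For small x the linear term dominates: f(x)/x -> 1.
for x in [0.1, 0.01, 0.001]:
    print(f"x = {x:>8.3f}  f(x)/x   = {f(x) / x:.4f}")
```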