Department of Applied Mathematics and Theoretical Physics

Numerical Analysis Course

From Polynomial Interpolation to Cryptography

Some extensions to the lecture material on polynomial interpolation, drawing on the example sheets and going beyond them.



In lectures you've seen polynomial interpolation from (essentially) one point of view. In what follows, it is helpful to cast polynomial interpolation in the framework of matrices and linear equations.

In standard polynomial interpolation of the function $f$ we seek coefficients $a_0, a_1, \dots, a_n$ of the polynomial $p(x)=a_0+a_1 x + \dots + a_n x^n$ such that, for some set of pairwise distinct real numbers $\{x_i\}$, $i=0,\dots,n$, the conditions

$$p(x_i) = f(x_i) =: f_i, \qquad i=0,\dots,n$$

hold.

Interpolation in the style of Vectors and Matrices

The above conditions can be written as a set of $n+1$ linear equations in the $n+1$ unknowns $a_0, a_1, ... , a_n$; we can write this in matrix form:

$$\left( \begin{array}{ccccc}
1 & x_0 & x_0^2 & \cdots & x_0^n \\
1 & x_1 & x_1^2 & \cdots & x_1^n \\
\vdots & \vdots & \vdots & & \vdots \\
1 & x_n & x_n^2 & \cdots & x_n^n \end{array}\right )
\left(\begin{array}{c}  a_0 \\ a_1 \\ \vdots \\ a_n \end{array} \right )
=\left(\begin{array}{c}  f_0 \\ f_1 \\ \vdots \\ f_n \end{array} \right )$$

The matrix on the LHS has a form that occurs so frequently that it is given a special name: it is a Vandermonde matrix.

It can be shown that because the $x_i$ are pairwise distinct the determinant of the matrix is nonzero, hence we get a unique solution to the interpolation problem. No surprises here, this was proved (from a different point of view) in lectures!
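In fact the Vandermonde determinant has a well-known closed form,

$$\det \left( \begin{array}{cccc}
1 & x_0 & \cdots & x_0^n \\
\vdots & \vdots & & \vdots \\
1 & x_n & \cdots & x_n^n \end{array}\right ) = \prod_{0 \le j < i \le n} (x_i - x_j),$$

which is nonzero exactly when the $x_i$ are pairwise distinct.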

In principle, once we have formulated the matrix form for a (well-posed) interpolation problem, we simply have to apply standard methods to solve the linear system of equations. In MATLAB this is especially easy to do:

B = [1 -1 1;
     1  0 0;
     1  1 1];
f = [0; 1; 0];
a = B\f

a =

     1
     0
    -1


(This interpolates the values $f(-1)=0$, $f(0)=1$ and $f(1)=0$; the result corresponds to the polynomial $p(x)=1-x^2$.)


However:

  1. There may be less general (but more efficient) approaches.
  2. Different ways of phrasing the problem may lead to different ('better') matrices. For example, if we use the Newton form of the interpolating polynomial the matrix becomes lower triangular with the natural ordering of the nodes (although it is no longer Vandermonde); try this.
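As an illustration of the second point, here is a short Python/NumPy sketch (using the same toy data as the MATLAB example above) showing that the Newton basis leads to a triangular matrix:

```python
import numpy as np

# Same toy data as the MATLAB example: f(-1) = 0, f(0) = 1, f(1) = 0.
x = np.array([-1.0, 0.0, 1.0])
f = np.array([0.0, 1.0, 0.0])
n = len(x)

# Newton basis: N_j(t) = (t - x_0)(t - x_1)...(t - x_{j-1}), with N_0 = 1.
# Entry (i, j) of the matrix is N_j(x_i); for j > i the product contains
# the factor (x_i - x_i) = 0, so the matrix is triangular.
N = np.ones((n, n))
for j in range(1, n):
    N[:, j] = N[:, j - 1] * (x - x[j - 1])

c = np.linalg.solve(N, f)   # the Newton coefficients (divided differences)
print(N)                    # [[1 0 0], [1 1 0], [1 2 2]] -- triangular
print(c)                    # [0, 1, -1]: p(t) = (t + 1) - (t + 1)t = 1 - t^2
```

Expanding the Newton form confirms that it reproduces the same interpolant $1 - t^2$ as the monomial-basis solve.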

Higher-order ('Osculatory') Interpolation (cf. Example Sheet 1)

As on Example Sheet 1, we may be required to interpolate the derivative of the function $f$ at some point, as well as the function value itself. Suppose this point is $x_0$ and that we no longer interpolate $f$ at $x_1$. Then we have:

$$\left( \begin{array}{ccccc}
1 & x_0 & x_0^2 & \cdots & x_0^n \\
0 & 1 & 2x_0 & \cdots & n x_0^{n-1} \\
1 & x_2 & x_2^2 & \cdots & x_2^n \\
\vdots & \vdots & \vdots & & \vdots \\
1 & x_n & x_n^2 & \cdots & x_n^n \end{array}\right )
\left(\begin{array}{c}  a_0 \\ a_1 \\ \vdots \\ a_n \end{array} \right )
=\left(\begin{array}{c}  f_0 \\ f^\prime_0 \\ f_2 \\ \vdots \\ f_n \end{array} \right )$$

Can you see where the new second row comes from? The condition

$$p^\prime(x_0) = f^\prime_0$$

is, more explicitly,

$$ a_0 \cdot 0 + a_1 \cdot 1 x_0^{1-1} + a_2 \cdot 2x_0^{2-1} + ... + a_n
\cdot n x_0^{n-1} = f^\prime_0$$

It is not hard to see how this can be extended further to higher-order derivatives and more points, or even to 'interpolating' features of the function $f$ other than derivatives. But:


  1. In the example above, we no longer interpolate $f$ at $x_1$ - otherwise the system becomes overdetermined!
  2. More generally: to obtain the right (i.e. same) number of equations and unknowns, you may have to increase the degree of the interpolating polynomial by adding more coefficients.
  3. The conditions you impose on the interpolating polynomial must be independent - that is, they must lead to linearly independent rows in the resulting linear system.
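As a concrete (hypothetical) instance of the system above, take $n=2$ and interpolate $f(0)=1$, $f^\prime(0)=0$ and $f(1)=0$; a minimal Python/NumPy sketch:

```python
import numpy as np

# Osculatory problem: match f(0) = 1, f'(0) = 0 and f(1) = 0
# with a quadratic p(x) = a0 + a1 x + a2 x^2.
x0, x1 = 0.0, 1.0
A = np.array([
    [1.0, x0, x0**2],        # p(x0)  = f(x0)
    [0.0, 1.0, 2.0 * x0],    # p'(x0) = f'(x0)  (row of derivative basis values)
    [1.0, x1, x1**2],        # p(x1)  = f(x1)
])
rhs = np.array([1.0, 0.0, 0.0])
a = np.linalg.solve(A, rhs)
print(a)                     # [1, 0, -1]: p(x) = 1 - x^2 again
```

Note the count works out: three independent conditions, three coefficients, and the rows of $A$ are linearly independent.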


We sketch below two applications of polynomial interpolation.

Richardson Extrapolation [sic!] and Romberg integration

Richardson Extrapolation is a general approach for using successive approximation values $f_j$, $j=1,...,n$ generated by some convergent scheme (e.g. root finding) to produce a new (and hopefully improved!) estimate of the limit.

We interpolate $f_j$ by a polynomial $p_j$ in inverse powers of $j$:

$$p_j=a_0 + a_1 j^{-1} + a_2 j^{-2} + \dots + a_{n-1} j^{-(n-1)}$$

Having calculated the $a_k$ (which can be done by divided differences) we take $a_0$ as our new estimate. The intuition is that $p_j \rightarrow a_0$ as $j \rightarrow \infty$, and $j \rightarrow \infty$ represents passing to the limit of successive approximations.
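A minimal Python sketch of this idea, computing the interpolant via divided differences and evaluating it at $j^{-1}=0$ (the sequence below is a made-up example with limit $2$):

```python
def richardson(f_vals):
    """Estimate the limit of a convergent sequence f_1, ..., f_n by
    interpolating in x = 1/j and evaluating the interpolant at x = 0."""
    n = len(f_vals)
    x = [1.0 / j for j in range(1, n + 1)]
    # Newton divided differences, computed in place
    coef = list(f_vals)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (x[i] - x[i - k])
    # Evaluate the Newton form at x = 0 by Horner's rule
    p = coef[-1]
    for i in range(n - 2, -1, -1):
        p = p * (0.0 - x[i]) + coef[i]
    return p

# Hypothetical sequence with limit 2: f_j = 2 + 3/j - 1/j^2
f = [2.0 + 3.0 / j - 1.0 / j**2 for j in range(1, 5)]
print(richardson(f))   # ~ 2.0, far better than the last term f_4 = 2.6875
```

Evaluating the interpolant at $j^{-1}=0$ is the same as reading off the coefficient $a_0$.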

Applying this approach to numerical quadrature (with either the trapezoidal or rectangle rules) gives the method of Romberg integration.

Shamir's Secret-Sharing Scheme

In cryptography, this is a method for distributing a secret number among $N$ people such that any $K$ of them working together can reconstruct the secret number, while fewer than $K$ cannot.

The details are straightforward but somewhat involved; the scheme can be understood as being based on polynomial interpolation in a finite field.
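To make the connection concrete, here is a minimal Python sketch (toy parameters and an illustrative prime, not production cryptography): each share is a point on a random polynomial whose constant term is the secret, and recovery is Lagrange interpolation at $x=0$ over the field $\mathbb{Z}_P$.

```python
import random

P = 2_147_483_647           # a Mersenne prime; all arithmetic is mod P

def make_shares(secret, n_people, k):
    """Split `secret` into n_people shares; any k of them recover it."""
    # Random polynomial of degree k-1 with constant term = secret
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    shares = []
    for x in range(1, n_people + 1):
        y = 0
        for c in reversed(coeffs):       # Horner's rule mod P
            y = (y * x + c) % P
        shares.append((x, y))
    return shares

def recover(shares):
    """Lagrange interpolation at x = 0 over Z_P."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(12345, n_people=5, k=3)
print(recover(shares[:3]))   # any 3 of the 5 shares give back 12345
```

With fewer than $K$ shares the polynomial is underdetermined, so every possible secret remains consistent with the shares held; this is exactly the uniqueness argument for interpolation, run in reverse.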