
Linear Interpolation#

The simplest interpolation we can imagine is connect-the-dots: we simply connect our data samples with straight lines.

[figure: example of linear interpolation]

As we see, this only gives the exact function values at the sample points themselves (for a general function), but it has the nice property that it does not introduce any new minima or maxima: on each segment the interpolant stays between the two data values it connects.
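
To make this concrete, here is a minimal check (on made-up sample values, purely for illustration) using NumPy's np.interp, which evaluates exactly this connect-the-dots interpolant:

import numpy as np

# made-up sample points, just for illustration
xp = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
fp = np.array([0.0, 2.0, 1.0, 3.0, -1.0])

# evaluate the piecewise-linear interpolant densely
x_dense = np.linspace(xp[0], xp[-1], 1001)
f_dense = np.interp(x_dense, xp, fp)

# the interpolant never goes outside the range of the data values
print(f_dense.min() >= fp.min(), f_dense.max() <= fp.max())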

Given \(N\) points sampling a function, \((x_i, f_i)\), the interpolant connecting the points \(i\) and \(i+1\) is:

\[f(x) = \frac{f_{i+1} - f_i}{x_{i+1} - x_i} (x - x_i) + f_i\]

for \(x_i \le x \le x_{i+1}\).

For \(N\) points, we have \(N-1\) such interpolants.
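
For example (with made-up numbers), if two neighboring samples are \((x_i, f_i) = (1, 2)\) and \((x_{i+1}, f_{i+1}) = (3, 6)\), then on \([1, 3]\)

\[f(x) = \frac{6 - 2}{3 - 1} (x - 1) + 2 = 2 x\]

which recovers \(f(1) = 2\) and \(f(3) = 6\) at the two sample points.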

Piecewise interpolation#

This is an example of piecewise interpolation. We don’t try to fit all of the points with a single polynomial, but instead fit a subset of points with a low-order polynomial (in this case, a line).
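
As a sketch of what this looks like in code (assuming the sample points xv are sorted; the helper name piecewise_linear is just illustrative), each query point is assigned to the segment that contains it, and that segment's straight line is evaluated. NumPy's np.interp performs the same piecewise-linear evaluation, so we can use it as a check.

import numpy as np

def piecewise_linear(x, xv, fv):
    # index of the left endpoint of the segment containing each query point
    i = np.searchsorted(xv, x) - 1
    i = np.clip(i, 0, len(xv) - 2)

    # evaluate that segment's straight line
    slope = (fv[i+1] - fv[i]) / (xv[i+1] - xv[i])
    return slope * (x - xv[i]) + fv[i]

# this should agree with NumPy's built-in piecewise-linear interpolation
xv = np.linspace(0.0, 5.0, 10)
fv = xv * np.sin(xv)
print(piecewise_linear(2.5, xv, fv), np.interp(2.5, xv, fv))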

In-class example:

Let’s write a function to tabulate \(f(x) = x \sin(x)\) on \([0, 5]\) and then use linear interpolation to get the function values.

How many function samples do we need to get a 1% error?

import numpy as np


def f(x):
    return x * np.sin(x)


def sample_function(func, npts, xmin=0.0, xmax=5.0):
    """tabulate func at npts evenly spaced points on [xmin, xmax]"""
    xx = np.linspace(xmin, xmax, npts)
    return xx, func(xx)


def interpolate(x0, xv, fv):
    """evaluate the linear interpolant through (xv, fv) at the point x0"""

    # find the first sample point with x[i] > x0
    idx = np.argwhere(xv > x0)[0][0]

    # we want to use this point and the one to its left,
    # so shift idx to point to the left point of the segment
    idx = max(0, idx - 1)

    slope = (fv[idx+1] - fv[idx]) / (xv[idx+1] - xv[idx])
    return slope * (x0 - xv[idx]) + fv[idx]
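
As a quick spot check of interpolate() (the choice of evaluation point 2.5 here is arbitrary), we can compare it against NumPy's np.interp, which should give the same value, and against the exact function:

xv, fv = sample_function(f, 10)
print(interpolate(2.5, xv, fv), np.interp(2.5, xv, fv), f(2.5))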

Let’s just look at \(x_0 = 1\), try different numbers of points, and measure the error.

x0 = 1
npts = [5, 10, 20, 40, 80, 160]
for n in npts:
    xv, fv = sample_function(f, n)

    finterp = interpolate(x0, xv, fv)
    fexact = f(x0)
    err = np.abs(finterp - fexact)
    print(f"{n:4} points, error = {err}")
   5 points, error = 0.10751363454768981
  10 points, error = 0.013746014526716532
  20 points, error = 0.002193334982023787
  40 points, error = 0.00041644795293660497
  80 points, error = 8.898902652587637e-05
 160 points, error = 2.043835377496528e-05

In this case, with 20 points our error drops below 1% of the exact value, \(f(1) \approx 0.84\). We also see that we appear to converge slightly better than 2nd order: each doubling of the number of points reduces the error by a bit more than a factor of 4.
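
We can estimate the measured convergence order directly from these errors: doubling the number of points roughly halves the spacing between samples, so for a method of order \(p\) the error should drop by about a factor of \(2^p\). A small sketch (reusing x0, npts, and the functions defined above):

errors = []
for n in npts:
    xv, fv = sample_function(f, n)
    errors.append(np.abs(interpolate(x0, xv, fv) - f(x0)))

for e_coarse, e_fine in zip(errors[:-1], errors[1:]):
    # p such that the error dropped by about 2**p when we doubled the points
    print(f"measured order p ~ {np.log2(e_coarse / e_fine):.2f}")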