# Homework 1 solutions
## 1. Roundoff
We are considering the function:

\[ f(x) = \frac{1}{\sqrt{x^3 + 1} - 1} \]

where \(x > 0\).
For small \(x\), \(\sqrt{x^3 + 1}\) is very close to 1, so the subtraction in the denominator suffers from catastrophic cancellation. Let’s define the analytically equivalent \(g(x)\) by multiplying the numerator and denominator by the conjugate, \(\sqrt{x^3 + 1} + 1\):

\[ g(x) = \frac{\sqrt{x^3 + 1} + 1}{(\sqrt{x^3 + 1} - 1)(\sqrt{x^3 + 1} + 1)} = \frac{\sqrt{x^3 + 1} + 1}{x^3} \]
We can also use a Taylor expansion for \(|x| \ll 1\), \(\sqrt{x^3 + 1} \approx 1 + \frac{1}{2} x^3\), giving

\[ f(x) \approx \frac{1}{\left(1 + \frac{1}{2} x^3\right) - 1} = \frac{2}{x^3} \]
Let’s write functions for these and look at their behavior for small \(x\):
```python
import numpy as np

def f(x):
    # original form: the subtraction in the denominator cancels for small x
    return 1.0 / (np.sqrt(x**3 + 1.0) - 1.0)

def g(x):
    # analytically equivalent form with the denominator rationalized
    return (np.sqrt(x**3 + 1.0) + 1.0) / x**3

def f_approx(x):
    # Taylor approximation, valid for |x| << 1
    return 2.0 / x**3

for x in [1.e-4, 1.e-5, 1.e-6]:
    print(f"{x:5.2e} {f(x):12.6g} {g(x):12.6g} {f_approx(x):12.6g}")
```
```
1.00e-04  1.99982e+12        2e+12        2e+12
1.00e-05   2.2518e+15        2e+15        2e+15
1.00e-06          inf        2e+18        2e+18
/tmp/ipykernel_4548/2416731082.py:2: RuntimeWarning: divide by zero encountered in scalar divide
  return 1.0 / (np.sqrt(x**3 + 1.0) - 1.0)
```
We see that our original function, \(f(x)\), is quite inaccurate for small \(x\), and at \(x = 10^{-6}\) the cancellation is complete, leaving a division by zero. The analytically equivalent version, \(g(x)\), behaves very well, and the approximation also works just fine for these values.
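We can estimate where \(f(x)\) fails completely: \(1.0 + x^3\) rounds to exactly 1 in double precision once \(x^3\) drops below roughly machine epsilon. Here's a quick sketch of this threshold (the cube-root estimate is approximate):

```python
import numpy as np

# 1.0 + x**3 rounds to exactly 1.0 once x**3 falls below roughly
# machine epsilon -- past that point, f(x) divides by zero
eps = np.finfo(np.float64).eps
print(eps**(1.0 / 3.0))                # ~6.1e-6: f fails below about this x
print(np.sqrt(1.e-6**3 + 1.0) - 1.0)   # exactly 0.0 in double precision
```

This is consistent with the table: \(f\) is merely inaccurate at \(x = 10^{-5}\) but infinite at \(x = 10^{-6}\).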
Note that for large \(x\), our approximation will be bad, while \(f(x)\) will agree well with \(g(x)\):
```python
for x in [1.e-1, 1.0, 1.e2, 1.e3, 1.e4]:
    print(f"{x:5.2e} {f(x):12.6g} {g(x):12.6g} {f_approx(x):12.6g}")
```
```
1.00e-01       2000.5       2000.5         2000
1.00e+00      2.41421      2.41421            2
1.00e+02     0.001001     0.001001        2e-06
1.00e+03  3.16238e-05  3.16238e-05        2e-09
1.00e+04        1e-06        1e-06        2e-12
```
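As a sanity check on the algebra, we can verify that \(f\) and \(g\) really are analytically equivalent; here's a quick symbolic sketch (assuming sympy is available):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f_sym = 1 / (sp.sqrt(x**3 + 1) - 1)
g_sym = (sp.sqrt(x**3 + 1) + 1) / x**3

# the difference simplifies to 0, confirming the two forms agree
print(sp.simplify(f_sym - g_sym))
```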
## 2. Finite Differences
We want to create a one-sided first-derivative approximation that uses the points \((x_i, f_i)\), \((x_{i+1}, f_{i+1})\), and \((x_{i+2}, f_{i+2})\).
We’ll assume that our grid is uniform, with spacing \(\Delta x\), so \(x_{i+1} = x_i + \Delta x\) and \(x_{i+2} = x_i + 2 \Delta x\).
Now we can Taylor expand \(f_{i+1}\) and \(f_{i+2}\) about \(x_i\):

\[ f_{i+1} = f_i + \Delta x \, f^\prime_i + \frac{\Delta x^2}{2} f^{\prime\prime}_i + \frac{\Delta x^3}{6} f^{\prime\prime\prime}_i + \ldots \]

\[ f_{i+2} = f_i + 2 \Delta x \, f^\prime_i + 2 \Delta x^2 f^{\prime\prime}_i + \frac{4 \Delta x^3}{3} f^{\prime\prime\prime}_i + \ldots \]
Notice that if we construct \(f_{i+2} - 4 f_{i+1}\), then the \(f^{\prime\prime}_i\) terms cancel. Writing that out:

\[ f_{i+2} - 4 f_{i+1} = -3 f_i - 2 \Delta x \, f^\prime_i + \frac{2}{3} \Delta x^3 f^{\prime\prime\prime}_i + \ldots \]
and solving for \(f^\prime_i\), we have:

\[ f^\prime_i = \frac{-3 f_i + 4 f_{i+1} - f_{i+2}}{2 \Delta x} + \frac{1}{3} \Delta x^2 f^{\prime\prime\prime}_i + \ldots \]

The leading error term is \(\mathcal{O}(\Delta x^2)\), so this is a second-order accurate approximation.
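As a cross-check on these coefficients, we can demand that the combination \(a f_i + b f_{i+1} + c f_{i+2}\) reproduce the exact derivative of \(1\), \(x\), and \(x^2\); here's a small sketch on a unit-spaced grid (\(x_i = 0\), \(\Delta x = 1\)):

```python
import numpy as np

# rows: values of f = 1, x, x^2 at the nodes x = 0, 1, 2;
# rhs: the exact derivatives of those functions at x = 0
A = np.array([[1.0, 1.0, 1.0],   # f = 1   -> f'(0) = 0
              [0.0, 1.0, 2.0],   # f = x   -> f'(0) = 1
              [0.0, 1.0, 4.0]])  # f = x^2 -> f'(0) = 0
b = np.array([0.0, 1.0, 0.0])
print(np.linalg.solve(A, b))     # [-1.5  2.  -0.5], i.e. (-3, 4, -1) / 2
```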
```python
def fprime_oneside(x0, f, dx):
    # one-sided, second-order accurate first derivative at x0
    return (-3 * f(x0) + 4 * f(x0 + dx) - f(x0 + 2*dx)) / (2 * dx)

def f(x):
    return np.sin(x)

def fprime_analytic(x):
    return np.cos(x)

x0 = 1.0

print(f"{'dx':^8} {'dfdx':^8} {'error':^9}")
for dx in [0.5, 0.25, 0.125, 0.0625, 0.03125]:
    dfdx = fprime_oneside(x0, f, dx)
    dfdx_true = fprime_analytic(x0)
    err = np.abs(dfdx - dfdx_true)
    print(f"{dx:8.5f} : {dfdx:8.5f} , {err:9.6f}")
```
```
   dx      dfdx     error
 0.50000 :  0.55627 ,  0.015967
 0.25000 :  0.54806 ,  0.007759
 0.12500 :  0.54269 ,  0.002389
 0.06250 :  0.54095 ,  0.000651
 0.03125 :  0.54047 ,  0.000169
```
We see that as we cut \(\Delta x\) in half, the error drops by roughly a factor of 4, as expected for a second-order accurate method; the ratio of the last two errors, \(0.000651 / 0.000169 \approx 3.9\), is close to the asymptotic value of 4.
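We can make this explicit by measuring the convergence order \(p\) directly, assuming the error behaves as \(\epsilon \sim C \Delta x^p\), so \(p \approx \log_2 \left[\epsilon(\Delta x) / \epsilon(\Delta x / 2)\right]\). A self-contained sketch:

```python
import numpy as np

def fprime_oneside(x0, f, dx):
    return (-3 * f(x0) + 4 * f(x0 + dx) - f(x0 + 2*dx)) / (2 * dx)

# measured order p ~ log2(err(dx) / err(dx/2)); the values
# climb toward 2 as dx shrinks and higher-order terms fade
x0 = 1.0
errs = [abs(fprime_oneside(x0, np.sin, dx) - np.cos(x0))
        for dx in [0.5, 0.25, 0.125, 0.0625, 0.03125]]
for e1, e2 in zip(errs, errs[1:]):
    print(f"measured order: {np.log2(e1 / e2):5.2f}")
```

The measured order approaches 2 from below, since the \(\mathcal{O}(\Delta x^3)\) term in the truncation error is still noticeable at the largest \(\Delta x\).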