# Homework 1 solutions
## 1
We want to derive a difference approximation to the second derivative. Let’s look at the Taylor expansions of \(f(x\pm h)\):

\[f(x \pm h) = f(x) \pm h f^\prime(x) + \frac{h^2}{2} f^{\prime\prime}(x) \pm \frac{h^3}{6} f^{\prime\prime\prime}(x) + \mathcal{O}(h^4)\]
Now, let’s eliminate the first derivative by combining these as:

\[f(x+h) + f(x-h) = 2 f(x) + h^2 f^{\prime\prime}(x) + \mathcal{O}(h^4)\]
Notice that the third-derivative term also cancels.
We can now solve for the second derivative:

\[f^{\prime\prime}(x) = \frac{f(x+h) - 2 f(x) + f(x-h)}{h^2} + \mathcal{O}(h^2)\]

Since the leading error term scales as \(h^2\), we expect this to be second-order accurate.
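As a check on the algebra, we can expand the difference formula symbolically for a generic \(f\). This is a quick sketch using SymPy (not part of the original solution); the symbol names are our own choices:

```python
import sympy

x, h = sympy.symbols("x h", positive=True)
f = sympy.Function("f")

# the centered second-derivative approximation
approx = (f(x + h) - 2 * f(x) + f(x - h)) / h**2

# Taylor-expand about h = 0: the leading term is f''(x) and the first
# correction enters at O(h**2), confirming second-order accuracy
expansion = sympy.series(approx, h, 0, 4)
print(expansion)
```

The expansion also shows the coefficient of the \(h^2\) error term, \(f^{(4)}(x)/12\).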
Let’s try this out. We’ll run on \(f(x) = \sin(x)\), whose second derivative is \(f^{\prime\prime}(x) = -\sin(x)\).
```python
import numpy as np

def f(x):
    return np.sin(x)

def d2fdx2(x):
    return -np.sin(x)

def second_deriv_difference(f, x, h):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2
```
Now let’s try a range of values of \(h\) at \(x_0 = 1\):
```python
x0 = 1.0
for h in [0.1, 0.05, 0.025, 0.0125]:
    print(f"{h:8} : {np.abs(second_deriv_difference(f, x0, h) - d2fdx2(x0))}")
```
```
     0.1 : 0.000700992120478694
    0.05 : 0.00017529184697717692
   0.025 : 4.382570074124015e-05
  0.0125 : 1.0956596325217838e-05
```
This shows the error (compared to the analytic solution) for different values of \(h\). As we see, when we cut \(h\) by a factor of 2, the error goes down by a factor of 4: this is the second-order convergence we expect!
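We can make that observation quantitative by estimating the convergence order directly: if the error behaves like \(Ch^p\), then \(\log_2(\epsilon(h)/\epsilon(h/2)) \approx p\). A small sketch (repeating the definitions above so it runs on its own):

```python
import numpy as np

def f(x):
    return np.sin(x)

def d2fdx2(x):
    return -np.sin(x)

def second_deriv_difference(f, x, h):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

x0 = 1.0
hs = [0.1, 0.05, 0.025, 0.0125]
errs = [abs(second_deriv_difference(f, x0, h) - d2fdx2(x0)) for h in hs]

# estimated order p from successive error ratios: e(h)/e(h/2) ~ 2**p
for e1, e2 in zip(errs, errs[1:]):
    print(f"estimated order: {np.log2(e1 / e2):.3f}")
```

Each printed estimate should be very close to 2.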
## 2
We are looking at

\[f(x) = \sqrt{x^2 + 1} - 1\]
As the problem states, in the limit of \(x\rightarrow 0\), this reduces to

\[f(x) \approx \frac{x^2}{2}\]
Now, consider the transformation suggested in the problem, multiplying by the conjugate:

\[f(x) = \left(\sqrt{x^2+1} - 1\right)\frac{\sqrt{x^2+1} + 1}{\sqrt{x^2+1} + 1} = \frac{x^2}{\sqrt{x^2+1} + 1}\]
The advantage of this last form is that it avoids subtracting two nearly equal numbers. Let’s compare the accuracy of these three forms:
```python
def f1(x):
    return np.sqrt(x * x + 1) - 1

def f2(x):
    return 0.5 * x * x

def f3(x):
    return x * x / (np.sqrt(x * x + 1) + 1)

for x in [1.e-6, 1.e-7, 1.e-8]:
    print(f"x = {x} : original form: {f1(x):13.8g}, limit: {f2(x):13.8g}, new form: {f3(x):13.8g}")
```
```
x = 1e-06 : original form: 5.0004445e-13, limit:         5e-13, new form:         5e-13
x = 1e-07 : original form: 4.8849813e-15, limit:         5e-15, new form:         5e-15
x = 1e-08 : original form:             0, limit:         5e-17, new form:         5e-17
```
We see that the roundoff error in the original form is severe: by \(x = 10^{-8}\), the subtraction loses all significant digits and returns 0. The new form of the expression agrees very well with the limit.
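To make the comparison sharper, we can look at the relative error of each form. For \(x\) this small, the limit \(x^2/2\) is exact to double precision, so it serves as a reference value. A sketch, repeating the definitions above so it runs on its own:

```python
import numpy as np

def f1(x):
    return np.sqrt(x * x + 1) - 1

def f3(x):
    return x * x / (np.sqrt(x * x + 1) + 1)

for x in [1.e-6, 1.e-7, 1.e-8]:
    # 0.5 * x * x is exact to roundoff at these x, so use it as the reference
    ref = 0.5 * x * x
    err1 = abs(f1(x) - ref) / ref
    err3 = abs(f3(x) - ref) / ref
    print(f"x = {x} : rel. error original: {err1:9.3g}, new: {err3:9.3g}")
```

The relative error of the original form grows as \(x\) shrinks, reaching 100% at \(x = 10^{-8}\), while the rewritten form stays at roundoff level throughout.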