4th-order Runge-Kutta#

Now we consider a 4th-order accurate method: 4th-order Runge-Kutta (RK4). For many problems this method works very well, and as a result RK4 is widely used.

We’ll consider a general system of first order differential equations:

\[\dot{\bf y} = {\bf f}(t, {\bf y})\]

The advance begins by estimating the derivatives (righthand side or slope) at time \(t^n\). We’ll call this \({\bf k}_1\).

\[{\bf k}_1 = {\bf f}(t^n, {\bf y}^n)\]

(figure: the 4th-order Runge-Kutta \({\bf k}_1\) slope)

We then follow the slope \({\bf k}_1\) to the midpoint in time, \(t^{n+1/2}\) and evaluate the slope there. We call the new slope \({\bf k}_2\).

\[{\bf k}_2 = {\bf f}(t^n + \tau/2, {\bf y}^n + (\tau/2) {\bf k}_1)\]

(figure: the 4th-order Runge-Kutta \({\bf k}_2\) slope)

We then go back to the start, but this time follow the new slope, \({\bf k}_2\) to the midpoint in time, \(t^{n+1/2}\). We again evaluate the slope here, and call it \({\bf k}_3\).

\[{\bf k}_3 = {\bf f}(t^n + \tau/2, {\bf y}^n + (\tau/2) {\bf k}_2)\]

(figure: the 4th-order Runge-Kutta \({\bf k}_3\) slope)

Finally, we go back to the start and follow \({\bf k}_3\) for the full timestep, to \(t^{n+1}\) and evaluate the slope there, calling it \({\bf k}_4\).

\[{\bf k}_4 = {\bf f}(t^n + \tau, {\bf y}^n + \tau {\bf k}_3)\]

(figure: the 4th-order Runge-Kutta \({\bf k}_4\) slope)

We then get the updated solution using a linear combination of the 4 slopes:

\[{\bf y}^{n+1} = {\bf y}^n + \frac{\tau}{6} ({\bf k}_1 + 2 {\bf k}_2 + 2 {\bf k}_3 + {\bf k}_4)\]

(figure: the final 4th-order Runge-Kutta update)

Note the similarity of RK4 to Simpson’s rule for integration.
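To make this connection concrete, consider the special case where \({\bf f}\) depends only on \(t\). Then \({\bf k}_2 = {\bf k}_3 = {\bf f}(t^{n+1/2})\), and the update becomes

\[{\bf y}^{n+1} = {\bf y}^n + \frac{\tau}{6} \left [ {\bf f}(t^n) + 4 {\bf f}(t^{n+1/2}) + {\bf f}(t^{n+1}) \right ]\]

which is exactly Simpson's rule applied to \(\int_{t^n}^{t^{n+1}} {\bf f}(t) \, dt\).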

Implementation#

We’ll again use the orbit_util.py module to get access to the common functions shared with the other integrators.

import numpy as np
import matplotlib.pyplot as plt
import orbit_util as ou

Here’s our implementation:

def int_rk4(state0, tau, T):

    times = []
    history = []
    
    # initialize time
    t = 0
    
    # store the initial conditions
    times.append(t)
    history.append(state0)
    
    # main timestep loop
    while t < T:
        
        state_old = history[-1]
        
        # make sure that the last step does not take us past T
        if t + tau > T:
            tau = T - t

        # k1 = slope at the start of the timestep
        k1 = ou.rhs(state_old)

        # k2 = slope at the midpoint in time, following k1
        state_tmp = state_old + 0.5 * tau * k1
        k2 = ou.rhs(state_tmp)

        # k3 = slope at the midpoint in time, following k2
        state_tmp = state_old + 0.5 * tau * k2
        k3 = ou.rhs(state_tmp)

        # k4 = slope at the end of the timestep, following k3
        state_tmp = state_old + tau * k3
        k4 = ou.rhs(state_tmp)

        # final update: weighted combination of the 4 slopes
        state_new = state_old + tau / 6.0 * (k1 + 2*k2 + 2*k3 + k4)
        t += tau
        
        # store the state
        times.append(t)
        history.append(state_new)
        
    return times, history

Let’s try it out now.

state0 = ou.initial_conditions()

tau = 1.0/12.0

times, history = int_rk4(state0, tau, 1)

fig = ou.plot(history)

This already looks pretty good, even for this very coarse timestep.

Convergence#

We can look at the convergence of RK4

taus = [0.1, 0.05, 0.025]

for n, tau in enumerate(taus):
    times, history = int_rk4(state0, tau, 1)
    
    if n == 0:
        fig = ou.plot(history, label=rf"$\tau = {tau:6.4f}$")
    else:
        ou.plot(history, ax=fig.gca(), label=rf"$\tau = {tau:6.4f}$")

fig.gca().legend()
for tau in [0.1, 0.05, 0.025, 0.0125, 0.00625]:
    times, history = int_rk4(state0, tau, 1)
    print(f"{tau:8} : {ou.error_radius(history):10.5g} {ou.error_position(history):10.5g}")
     0.1 :   0.020244     0.1074
    0.05 : 0.00054733  0.0039053
   0.025 : 1.6779e-05 0.00016588
  0.0125 : 5.2225e-07 7.9308e-06
 0.00625 : 1.6305e-08 4.1917e-07

This is clearly converging faster than 2nd order. 4th-order accuracy means that as we cut the timestep in half, the error should go down by a factor of \(2^4 = 16\).
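As a rough check, we can estimate the measured convergence order from the ratio of errors between successive halvings of the timestep. The sketch below reuses int_rk4(), state0, and the ou.error_position() measure from above; for a 4th-order method the ratio should approach \(2^4\) as \(\tau \rightarrow 0\).

# estimate the convergence order p from err(tau) / err(tau/2) ~ 2**p,
# using the position error measure from orbit_util
errs = []
for tau in [0.1, 0.05, 0.025, 0.0125, 0.00625]:
    _, history = int_rk4(state0, tau, 1)
    errs.append(ou.error_position(history))

for e_coarse, e_fine in zip(errs, errs[1:]):
    print(f"measured order ~ {np.log2(e_coarse / e_fine):5.2f}")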

Timestepping#

In the above examples we always kept the timestep \(\tau\) fixed, but in general the solution to a system of ODEs can have portions in time where it changes rapidly and a smaller \(\tau\) is needed. Likewise, if the solution is changing slowly, we can use a larger timestep.

Most ODE libraries use some form of local error estimation to measure how large the error is in the solution and adjust the timestep (up or down) to achieve a desired accuracy.
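One common approach is step doubling: take a single step of size \(\tau\) and two steps of size \(\tau/2\) from the same starting point, and use the difference between the two results as an estimate of the local error. Below is a minimal sketch of the idea, assuming the same ou.rhs() and OrbitState arithmetic used by int_rk4() above; the helper single_step(), the tolerance eps, and the particular error measure are illustrative choices, not part of orbit_util.py.

def single_step(state_old, tau):
    """take a single RK4 step of size tau (same stages as int_rk4 above)"""
    k1 = ou.rhs(state_old)
    k2 = ou.rhs(state_old + 0.5 * tau * k1)
    k3 = ou.rhs(state_old + 0.5 * tau * k2)
    k4 = ou.rhs(state_old + tau * k3)
    return state_old + tau / 6.0 * (k1 + 2*k2 + 2*k3 + k4)

def adaptive_step(state_old, tau, eps=1.e-8):
    """advance one step, adjusting tau to meet the error tolerance eps;
    returns the new state, the timestep used, and a suggested next timestep"""
    while True:
        # one full step and two half steps from the same starting point
        y_full = single_step(state_old, tau)
        y_half = single_step(single_step(state_old, 0.5 * tau), 0.5 * tau)

        # local error estimate from the difference in position
        err = abs(y_half.x - y_full.x) + abs(y_half.y - y_full.y)

        if err <= eps:
            # accept the (more accurate) two-half-step result and suggest a
            # new timestep; the exponent 1/5 reflects the O(tau^5) local error
            tau_next = min(0.9 * tau * (eps / max(err, 1.e-16))**0.2,
                           2.0 * tau)   # limit how fast tau can grow
            return y_half, tau, tau_next

        # reject the step and retry with a smaller timestep
        tau = 0.9 * tau * (eps / err)**0.2

Production integrators usually rely on embedded Runge-Kutta pairs (like RK45), which provide an error estimate from the same stage evaluations instead of repeating the step, but the timestep-control logic is similar.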

To see why this might be needed, consider an elliptical orbit.

(figure: an elliptical orbit)

The initial conditions for a planet at perihelion (on the +y axis) are:

\[x = 0\]
\[y = a (1 - e)\]
\[v_x = -\sqrt{\frac{GM_\star}{a} \frac{1+e}{1-e}}\]
\[v_y = 0\]

where \(a\) is the length of the semi-major axis and \(0 \le e < 1\) is the eccentricity. For an eccentric orbit, the velocity changes throughout the orbit and is largest at perihelion, so that is where the solution changes most rapidly.

Exercise

Integrate an orbit with a large eccentricity (like \(e = 0.8\)). Pick a timestep such that visually the orbit looks okay. Now make a plot of the total energy per unit mass, \(\mathcal{E}\) vs time:

\[\mathcal{E} = \frac{1}{2} |{\bf v}|^2 - \frac{GM_\star}{r}\]

At what point in the orbit is the energy conservation the worst?

First the initial conditions

a = 1.0
e = 0.6

x0 = 0
y0 = a * (1 - e)
u0 = -np.sqrt(ou.GM / a * (1 + e)/(1 - e))
v0 = 0

state0 = ou.OrbitState(x0, y0, u0, v0)

Now we can integrate

tau = 0.025
T = 1

times, history = int_rk4(state0, tau, T)

fig = ou.plot(history)

We can compute the energy per unit mass now.

E = np.array([0.5 * (state.u**2 + state.v**2) - ou.GM / np.sqrt(state.x**2 + state.y**2)
              for state in history])

fig = plt.figure()
ax = fig.add_subplot(111)

ax.plot(times, E/E[0])
ax.set_xlabel("t [yr]")
ax.set_ylabel(r"$\mathcal{E}/\mathcal{E}(t=0)$")

Notice that the energy conservation is not good, and it is worst at perihelion, where the solution is changing fastest.
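As a quick check, reusing E, times, and history from the cells above, we can locate where the deviation from the initial energy is largest:

# find the index where the relative energy deviation is largest and
# report the time and orbital radius there
ierr = np.argmax(np.abs(E/E[0] - 1.0))
worst = history[ierr]
print(f"largest energy deviation at t = {times[ierr]:.3f}, "
      f"r = {np.sqrt(worst.x**2 + worst.y**2):.3f}")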

C++ implementation#

Here’s a C++ implementation of 4th-order Runge-Kutta that follows the same layout as the Python version here:

zingale/computational_astrophysics