
import numpy as np
import matplotlib.pyplot as plt

4th-order Runge-Kutta#

Now we consider a 4th-order accurate method: 4th-order Runge-Kutta (RK4). For many problems this method works very well, and as a result RK4 is widely used.

We’ll consider a general system of first order differential equations:

\[\dot{\bf y} = {\bf f}(t, {\bf y})\]

The advance begins by estimating the derivatives (the right-hand side, or slope) at time \(t^n\). We'll call this \({\bf k}_1\).

\[{\bf k}_1 = {\bf f}(t^n, {\bf y}^n)\]

4th order Runge-Kutta k1

We then follow the slope \({\bf k}_1\) to the midpoint in time, \(t^{n+1/2}\), and evaluate the slope there. We call the new slope \({\bf k}_2\).

\[{\bf k}_2 = {\bf f}(t^n + \tau/2, {\bf y}^n + (\tau/2) {\bf k}_1)\]

4th order Runge-Kutta k2

We then go back to the start, but this time follow the new slope, \({\bf k}_2\), to the midpoint in time, \(t^{n+1/2}\). We again evaluate the slope there, and call it \({\bf k}_3\).

\[{\bf k}_3 = {\bf f}(t^n + \tau/2, {\bf y}^n + (\tau/2) {\bf k}_2)\]

4th order Runge-Kutta k3

Finally, we go back to the start and follow \({\bf k}_3\) for the full timestep, to \(t^{n+1}\), and evaluate the slope there, calling it \({\bf k}_4\).

\[{\bf k}_4 = {\bf f}(t^n + \tau, {\bf y}^n + \tau {\bf k}_3)\]

4th order Runge-Kutta k4

We then get the updated solution using a linear combination of the 4 slopes:

\[{\bf y}^{n+1} = {\bf y}^n + \frac{\tau}{6} ({\bf k}_1 + 2 {\bf k}_2 + 2 {\bf k}_3 + {\bf k}_4)\]

4th order Runge-Kutta final

Note the similarity of RK4 to Simpson’s rule for integration.
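To make the update concrete before turning to the orbit problem, here is a minimal sketch of a single RK4 step for a generic right-hand-side function. The helper name rk4_step and the scalar test problem are just for illustration; they are not part of the module used below.

def rk4_step(f, t, y, tau):
    """advance y by one RK4 step of size tau for dy/dt = f(t, y)"""
    k1 = f(t, y)
    k2 = f(t + 0.5 * tau, y + 0.5 * tau * k1)
    k3 = f(t + 0.5 * tau, y + 0.5 * tau * k2)
    k4 = f(t + tau, y + tau * k3)
    return y + tau / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# quick check on dy/dt = -y with y(0) = 1, whose exact solution is exp(-t)
t, y, tau = 0.0, 1.0, 0.1
for _ in range(10):
    y = rk4_step(lambda t, y: -y, t, y, tau)
    t += tau

print(y, np.exp(-1.0))   # these should agree to roughly 6-7 digits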

We’ll again use the orbit_util.py module to get access to the common functions shared with the other integrators.

import orbit_util as ou

Here’s our implementation:

def int_rk4(state0, tau, T):

    times = []
    history = []
    
    # initialize time
    t = 0
    
    # store the initial conditions
    times.append(t)
    history.append(state0)
    
    # main timestep loop
    while t < T:
        
        state_old = history[-1]
        
        # make sure that the last step does not take us past T
        if t + tau > T:
            tau = T - t

        # get the RHS
        k1 = ou.rhs(state_old)
         
        state_tmp = state_old + 0.5 * tau * k1
        k2 = ou.rhs(state_tmp)
        
        state_tmp = state_old + 0.5 * tau * k2
        k3 = ou.rhs(state_tmp)
        
        state_tmp = state_old + tau * k3
        k4 = ou.rhs(state_tmp)
        
        # do the final update
        state_new = state_old + tau / 6.0 * (k1 + 2*k2 + 2*k3 + k4)
        t += tau
        
        # store the state
        times.append(t)
        history.append(state_new)
        
    return times, history

Let’s try it out now.

state0 = ou.initial_conditions()

tau = 1.0/12.0

times, history = int_rk4(state0, tau, 1)

fig = ou.plot(history)

This already looks pretty good, even for this very coarse timestep.

We can look at the convergence of RK4.

taus = [0.1, 0.05, 0.025]

for n, tau in enumerate(taus):
    times, history = int_rk4(state0, tau, 1)
    
    if n == 0:
        fig = ou.plot(history, label=rf"$\tau = {tau:6.4f}$")
    else:
        ou.plot(history, ax=fig.gca(), label=rf"$\tau = {tau:6.4f}$")

fig.gca().legend()
for tau in [0.1, 0.05, 0.025, 0.0125, 0.00625]:
    times, history = int_rk4(state0, tau, 1)
    print(f"{tau:8} : {ou.error_radius(history):10.5g} {ou.error_position(history):10.5g}")
     0.1 :   0.020244     0.1074
    0.05 : 0.00054733  0.0039053
   0.025 : 1.6779e-05 0.00016588
  0.0125 : 5.2225e-07 7.9308e-06
 0.00625 : 1.6305e-08 4.1917e-07

This is clearly converging faster than 2nd order. 4th-order accuracy means that as we cut the timestep in half, the error should go down by a factor of \(2^4 = 16\).
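As a quick sanity check, we can measure the convergence order directly from successive runs. This is just a small sketch reusing the int_rk4 and ou.error_position calls from above.

errs = []
taus = [0.1, 0.05, 0.025, 0.0125, 0.00625]
for tau in taus:
    _, history = int_rk4(state0, tau, 1)
    errs.append(ou.error_position(history))

# each halving of tau should cut the error by roughly 2**4 = 16,
# i.e., the measured order should approach 4
for e_coarse, e_fine in zip(errs, errs[1:]):
    print(f"measured order ~ {np.log2(e_coarse / e_fine):5.2f}")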

Timestepping#

In the above examples, we always kept the timestep \(\tau\) fixed, but in general the solution to a system of ODEs might have portions in time where it changes rapidly and a smaller \(\tau\) is needed. Likewise, if the solution is changing slowly, we can use a larger timestep.

Most ODE libraries use some form of local error estimation to measure how large the error is in the solution and adjust the timestep (up or down) to achieve a desired accuracy.
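One simple way to estimate the local error is step doubling: take one full step of size \(\tau\), take two half steps to the same time, and compare the results. Here is a rough sketch of that idea, reusing the illustrative rk4_step helper from above; the tolerance eps, the safety factor, and the rescaling formula are illustrative choices, not what any particular library does.

def rk4_adaptive_step(f, t, y, tau, eps=1.e-8):
    """take one RK4 step, using step doubling to estimate the local
    error and suggest a new timestep"""

    # one full step and two half steps ending at the same time
    y_single = rk4_step(f, t, y, tau)
    y_half = rk4_step(f, t, y, 0.5 * tau)
    y_double = rk4_step(f, t + 0.5 * tau, y_half, 0.5 * tau)

    # the difference between the two estimates measures the local error
    err = np.max(np.abs(y_double - y_single))

    # RK4's local error scales as tau**5, so rescale tau toward the
    # desired tolerance (0.9 is a safety factor)
    tau_new = 0.9 * tau * (eps / max(err, 1.e-16))**0.2

    return y_double, tau_new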

To see why this might be needed, consider an elliptical orbit.

elliptical orbit

The initial conditions for a planet at perihelion (on the +y axis) are:

\[x = 0\]
\[y = a (1 - e)\]
\[v_x = -\sqrt{\frac{GM_\star}{a} \frac{1+e}{1-e}}\]
\[v_y = 0\]

where \(a\) is the length of the semi-major axis and \(0 \le e < 1\) is the eccentricity. For an eccentric orbit, the velocity changes throughout the orbit, so when the planet is at perihelion, the solution is changing rapidly.
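The expression for \(v_x\) is just the vis-viva equation, \(v^2 = GM_\star (2/r - 1/a)\), evaluated at perihelion, \(r = a(1-e)\):

\[v_p^2 = \frac{GM_\star}{a}\left(\frac{2}{1-e} - 1\right) = \frac{GM_\star}{a}\,\frac{1+e}{1-e}\]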

Example:

Integrate an orbit with a large eccentricity (like \(e = 0.8\)). Pick a timestep such that visually the orbit looks okay. Now make a plot of the total energy per unit mass, \(\mathcal{E}\) vs time:

\[\mathcal{E} = \frac{1}{2} |{\bf v}|^2 - \frac{GM_\star}{r}\]

At what point in the orbit is the energy conservation the worst?

First, the initial conditions:

a = 1.0
e = 0.6

x0 = 0
y0 = a * (1 - e)
u0 = -np.sqrt(ou.GM / a * (1 + e)/(1 - e))
v0 = 0

state0 = ou.OrbitState(x0, y0, u0, v0)

Now we can integrate:

tau = 0.025
T = 1

times, history = int_rk4(state0, tau, T)

fig = ou.plot(history)

We can now compute the energy per unit mass.

E = np.array([0.5 * (state.u**2 + state.v**2) - ou.GM / np.sqrt(state.x**2 + state.y**2)
              for state in history])

fig = plt.figure()
ax = fig.add_subplot(111)

ax.plot(times, E/E[0])
ax.set_xlabel("t [yr]")
ax.set_ylabel(r"$\mathcal{E}/\mathcal{E}(t=0)$")

Notice that the energy conservation is not good, and it is worst at perihelion, where the solution is changing fastest.
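One way to see this, reusing the times, history, and E from above, is to overplot the normalized radius with the relative change in energy; consistent with the statement above, the energy error should grow where the radius is smallest.

r = np.array([np.sqrt(state.x**2 + state.y**2) for state in history])
dE = np.abs(E / E[0] - 1.0)

fig = plt.figure()
ax = fig.add_subplot(111)

ax.plot(times, dE, label=r"$|\mathcal{E}/\mathcal{E}(t=0) - 1|$")
ax.plot(times, r / r.max(), label=r"$r / r_\mathrm{max}$")
ax.set_xlabel("t [yr]")
ax.legend()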

C++ implementation#

Here’s a C++ implementation of 4th-order Runge-Kutta that follows the same layout as the Python version here:

zingale/computational_astrophysics