Numerical integration of ordinary differential equations

We will cover only initial value problems, i.e. we are interested in the behaviour of a system as it evolves over time from a known starting state.

State-space again

Numerical solvers tend to be written for problems of the form

\[ \dot{\vec{x}}=f(t,\vec{x}) \]

Sometimes expressed with a mass matrix

\[ M(t,\vec{x})\dot{\vec{x}}=f(t,\vec{x}) \]

Note: Numerical solvers will often adapt the step size to the problem and can use additional information, such as problem constraints or higher-order derivatives. The solver also needs to know the initial value. System inputs can be included in the process.

Note 2: Stiff solvers may be needed for problems where there is a strong coupling between state variables, e.g. Robertson's chemical reaction (use your favourite search engine to find out more).
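A quick numerical sketch of why stiffness matters (the decay rate and step size below are chosen purely for illustration): for the stiff scalar problem $\dot{x}=-1000x$, a simple explicit update diverges at a step size where an implicit update remains stable.

```python
# Illustration of stiffness: dx/dt = -1000*x, x(0) = 1.
# The true solution exp(-1000*t) is essentially zero by t = 0.1,
# but an explicit (Forward Euler) step with h = 0.01 diverges,
# while an implicit (Backward Euler) step stays stable.
a = -1000.0   # stiff decay rate (illustrative value)
h = 0.01      # step size (illustrative value)
steps = 10    # integrate to t = 0.1

x_explicit = 1.0
x_implicit = 1.0
for _ in range(steps):
    x_explicit = x_explicit + h * a * x_explicit  # x <- (1 + h*a) * x
    x_implicit = x_implicit / (1.0 - h * a)       # x <- x / (1 - h*a)

print(x_explicit)  # huge: (1 - 10)^10 = (-9)^10, about 3.5e9 -- diverged
print(x_implicit)  # tiny positive number: the implicit scheme decays correctly
```

A dedicated stiff solver automates this kind of implicit stepping while also adapting the step size.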

Forward Euler

Suppose we know the continuous time state-space equations for a system.

\[ \dot{\vec{x}}=f(\vec{x})+g(\vec{u}) \]

For the moment we will consider $f$ and $g$ to be non-linear functions, although we can always revert to matrices $A$ and $B$ if the system is linear.

Approximate this as

\[ \dot{\vec{x}}\approx \frac{\vec{x}(t+\Delta)-\vec{x}(t)}{(t+\Delta)-t} =f(\vec{x}(t))+g(u(t)) \]

The $t$ terms in the denominator cancel, leaving $\Delta$, so we can rearrange the equation to be

\[ \vec{x}(t+\Delta)=\vec{x}(t) +\Delta f(\vec{x}(t))+\Delta g(u(t)) \]

If we set $\Delta$ to be fixed this is then a sampled data system so we can effectively integrate the states. If we ignore the input for the moment, and accept that $\vec{x}$ is a function of time, we get

\[ x_{n+1}=x_{n}+\Delta f(x_{n}) \]

And if the system is linear then $f(\vec{x})$ becomes $A\vec{x}$ which is simply a fixed matrix so the equation becomes

\[ x_{n+1}=x_{n}+\Delta A x_{n} \]

note: Forward Euler is known to be a poor integrator, but it is often used in real-time software. $\Delta$ needs to be small to be accurate, but a small $\Delta$ is then expensive in computer time.
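The fixed-step update above can be sketched in a few lines (a minimal illustration, using the scalar test problem $\dot{x}=-x$ so the result can be checked against the exact solution $e^{-t}$):

```python
import math

# Forward Euler for dx/dt = f(x); here f(x) = -x, exact solution exp(-t).
def f(x):
    return -x

delta = 0.001           # fixed step size (illustrative value)
x = 1.0                 # initial value x(0)
for _ in range(1000):   # integrate from t = 0 to t = 1
    x = x + delta * f(x)   # x_{n+1} = x_n + delta * f(x_n)

print(x, math.exp(-1.0))  # the two values agree to roughly 3 decimal places
```

Halving $\Delta$ roughly halves the error (Forward Euler is first-order accurate), which is why small steps are needed.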

challenge: use the $q$ operator to rewrite this equation, and then see if you can sketch it out as a transfer function and as a code fragment.

Backwards Euler

If we look at the approximation used in the Forward Euler case (and assume the system is linear, so $f(\vec{x})=A\vec{x}$, with no input), we can instead approximate the derivative using the previous time step

\[ \frac{\vec{x}(t)-\vec{x}(t-\Delta)}{t-(t-\Delta)} = A\vec{x}(t) \]

or

\[ \vec{x}(t)=\vec{x}(t-\Delta)+\Delta A\vec{x}(t) \]

Again if $\Delta$ is constant we can consider this as a sampled data system.

\[ x_{n}=x_{n-1}+\Delta f(t_{n},x_{n}) \]

Notice that the unknown state $x_n$ now appears on both sides of the equation, so each step requires solving an implicit equation for $x_n$. This is straightforward for some systems (for a linear system we can rearrange to get $x_n=(I-\Delta A)^{-1}x_{n-1}$), but in most cases we will look for more sophisticated numerical integrators, such as Runge–Kutta.
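For a linear scalar system $\dot{x}=ax$ the implicit step can be rearranged by hand to $x_n=x_{n-1}/(1-\Delta a)$, so Backward Euler is cheap in this special case. A minimal sketch, using $a=-1$ so the result can again be checked against $e^{-t}$:

```python
import math

# Backward Euler for the linear scalar problem dx/dt = a*x.
# The implicit update x_n = x_{n-1} + delta*a*x_n rearranges to
# x_n = x_{n-1} / (1 - delta*a).
a = -1.0       # decay rate (illustrative value)
delta = 0.001  # fixed step size (illustrative value)
x = 1.0        # initial value x(0)
for _ in range(1000):  # integrate from t = 0 to t = 1
    x = x / (1.0 - delta * a)

print(x, math.exp(-1.0))  # the two values agree to roughly 3 decimal places
```

For a non-linear $f$ no such rearrangement exists, and each step would need a root-finding iteration (e.g. Newton's method), which is where dedicated implicit solvers come in.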

Runge Kutta (RK4)

(Here we change from using $\vec{x}$ for the vector of states to $y$, which can also be a vector of states - that way we don't need to change the wikimedia figure!)

$\dot {y}=f(t,y)$ with initial values $y(0)=y_{0}$.

Gradients used to compute Runge Kutta RK4. (Image from wikimedia)

For a positive step size $h$, compute in order the following values of the function to be integrated (i.e. the function gradients):
  1. $k_1=f(t_0,y_0)$
  2. $k_2=f(t_0+\frac{h}2,y_0+\frac{h}2 k_1)$ i.e. use $k_1$ to locate a point near the true curve half a step away. This requires incrementing $t_0$ by $\frac{h}2$ and incrementing $y_0$ by $\frac{h}2 k_1$
  3. $k_3=f(t_0+\frac{h}2,y_0+\frac{h}2 k_2)$ i.e. use $k_2$ to make another half-step estimate
  4. $k_4=f(t_0+h,y_0+h k_3)$ i.e. use $k_3$ to make a full-step estimate.

You now have the 4 gradients shown in the figure.

The function estimate used as the result is then

\begin{align} y_{1}&=y_{0}+h{\frac{1}{6}}\left(k_{1}+2k_{2}+2k_{3}+k_{4}\right)\\ t_{1}&=t_{0}+h \label{eq:sample} \end{align}

See \eqref{eq:sample}.

Repeat for the new values of $t_1$ and $y_1$ so in general

\begin{align} y_{n+1}&=y_{n}+h{\frac {1}{6}}\left(k_{1}+2k_{2}+2k_{3}+k_{4}\right)\\ t_{n+1}&=t_{n}+h \end{align}

where $k_1$, $k_2$, $k_3$ and $k_4$ are recomputed at each step from $(t_n,y_n)$, as defined by the figure.
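Putting the four gradients and the weighted update together, one RK4 step can be sketched as follows (a minimal illustration, again checked against the test problem $\dot{y}=-y$ with exact solution $e^{-t}$):

```python
import math

# One step of classical RK4 for dy/dt = f(t, y).
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)  # half step using k1
    k3 = f(t + h / 2, y + h / 2 * k2)  # half step using k2
    k4 = f(t + h, y + h * k3)          # full step using k3
    return t + h, y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Test problem dy/dt = -y with exact solution exp(-t).
t, y = 0.0, 1.0
h = 0.1
for _ in range(10):  # integrate from t = 0 to t = 1
    t, y = rk4_step(lambda t, y: -y, t, y, h)

print(y, math.exp(-1.0))  # the two values agree to about 6 decimal places
```

Note that RK4 achieves this accuracy with $h=0.1$, where the Euler sketches above needed $h=0.001$ for only 3-decimal agreement - the payoff for evaluating $f$ four times per step.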