Contents

1. Introduction

2. Key Idea

3. Root finding in one dimension

4. Linear equations

5. Numerical integration

6. First order ordinary differential equations

7. Higher order ordinary differential equations

8. Partial differential equations

The key idea behind the numerical solution of ODEs is
the combination of function values at different points or times
to approximate the derivatives in the required equation. The manner
in which the function values are combined is determined by the
Taylor Series expansion for the point at which the derivative
is required. This gives us a *finite difference* approximation
to the derivative.
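As a concrete illustration of a finite difference approximation, the Python sketch below (the choice of f(t) = exp(t) and the evaluation point t = 1 are assumed for the example) shows the forward difference (f(t+Δt) - f(t))/Δt approaching the exact derivative, with an error falling roughly in proportion to Δt:

```python
import math

# Forward-difference approximation f'(t) =~ (f(t + dt) - f(t))/dt,
# illustrated on the assumed example f(t) = exp(t), whose exact
# derivative at t = 1 is e.
def forward_difference(f, t, dt):
    return (f(t + dt) - f(t)) / dt

exact = math.e
for dt in (0.1, 0.05, 0.025):
    approx = forward_difference(math.exp, 1.0, dt)
    # The error shrinks roughly in proportion to dt (first order).
    print(dt, abs(approx - exact))
```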

Consider a first order ODE of the form

y' = f(t,y), | (60) |

subject to some boundary/initial condition y(t=t_{0}) = c.
The finite difference solution of this equation proceeds by discretising
the independent variable t
to t_{0},
t_{0}+Δt,
t_{0}+2Δt,
t_{0}+3Δt,
… We shall denote the exact solution at some t = t_{n} = t_{0}+nΔt
by y_{n} = y(t_{n}), and our approximate solution by Y_{n}. The
differential equation then requires

Y'_{n} = f(t_{n},Y_{n}) | (61) |

at each of the points in the domain.

If we take the Taylor Series expansion for the grid points in
the neighbourhood of some point t = t_{n},

Y_{n+i} = Y_{n} + iΔt Y'_{n} + ½(iΔt)^{2}Y"_{n} + ^{1}/_{6}(iΔt)^{3}Y'"_{n} + …, | (62) |

we may then take linear combinations of these expansions to obtain
an approximation for the derivative Y'_{n}
at t = t_{n},

Y'_{n} =~ (1/Δt) Σ_{i} a_{i}Y_{n+i}. | (63) |
The linear combination is chosen so as to eliminate the term in
Y_{n},
requiring

Σ_{i} a_{i} = 0, | (64) |

and, depending on the method, possibly some of the terms of higher
order. We shall look at various strategies for choosing the a_{i}
in the following sections. Before doing so, we need to consider
the error associated with approximating the derivative in this manner.

The *global truncation error* at the *n*th
step is the cumulative total of the truncation error at the previous
steps and is

E_{n} = Y_{n} - y_{n}. | (65) |

In contrast, the *local truncation error* for the *n*th
step is

e_{n} = Y_{n} - y*_{n}, | (66) |

where y*_{n} is
the exact solution of our differential equation but with the initial
condition y*_{n-1} = Y_{n-1} at the previous step.

The Euler method is the simplest finite difference scheme to understand and implement. By approximating the derivative in (61) as

Y'_{n} =~ (Y_{n+1} - Y_{n})/Δt, | (67) |

in our differential equation for Y_{n}
we obtain

Y_{n+1} = Y_{n} + Δt f(t_{n},Y_{n}). | (68) |

Given the initial/boundary condition Y_{0} = c,
we may obtain Y_{1}
from Y_{0} + Δt f(t_{0},Y_{0}),
Y_{2}
from Y_{1} + Δt f(t_{1},Y_{1}),
and so on, marching forwards through time. This process is shown
graphically in figure 18.
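The marching process can be sketched in a few lines of Python. The test problem y' = -y, y(0) = 1 (exact solution exp(-t)) is an assumed example, not part of the notes:

```python
import math

# Euler method sketch, equation (68): Y_{n+1} = Y_n + dt*f(t_n, Y_n).
# Assumed test problem: y' = -y, y(0) = 1, exact solution exp(-t).
def euler(f, t0, y0, dt, nsteps):
    t, y = t0, y0
    history = [y0]
    for _ in range(nsteps):
        y = y + dt * f(t, y)   # march forwards one step
        t = t + dt
        history.append(y)
    return history

ys = euler(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
print(ys[-1], math.exp(-1.0))  # the two values should be close
```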

The Euler method is termed an *explicit* method
because we are able to write down an explicit expression for Y_{n+1}
in terms of quantities already known at t_{n}.

The Euler method outlined in the previous section
may be summarised by the update formula Y_{n+1} = Y_{n} + Δt f(t_{n},Y_{n}).

The backward Euler method is almost identical to
its explicit relative, the only difference being that the derivative
Y'_{n}
is approximated by

Y'_{n} =~ (Y_{n} - Y_{n-1})/Δt, | (69) |

to give the evolution equation

Y_{n+1} = Y_{n} + Δt f(t_{n+1},Y_{n+1}). | (70) |

This is shown graphically in figure 19.

The dependence of the right-hand side on the variables at t_{n+1}
rather than t_{n} makes the method implicit: in general an algebraic
equation must be solved for Y_{n+1} at each step.
As the derivative Y'_{n}
is approximated only to the first order, the Backward Euler method
has errors of the same order as the explicit Euler method.
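A sketch of the Backward Euler update (70) in Python. Because Y_{n+1} appears on both sides, an implicit equation must be solved at each step; simple fixed-point iteration is used here purely for illustration (Newton iteration is the usual choice in practice), and the test problem y' = -y is an assumption:

```python
import math

# Backward Euler sketch, equation (70): Y_{n+1} = Y_n + dt*f(t_{n+1}, Y_{n+1}).
# The implicit equation is solved by fixed-point iteration, an
# illustrative choice that converges here because dt*|df/dy| < 1.
def backward_euler(f, t0, y0, dt, nsteps, iters=20):
    t, y = t0, y0
    for _ in range(nsteps):
        t_next = t + dt
        y_next = y                       # initial guess: previous value
        for _ in range(iters):
            y_next = y + dt * f(t_next, y_next)
        t, y = t_next, y_next
    return y

y_end = backward_euler(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
print(y_end, math.exp(-1.0))  # the two values should be close
```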

The Romberg integration approach (presented in section 5.6)
of using two approximations of different step size to construct
a more accurate estimate may also be used for numerical solution
of ordinary differential equations. Again, if we have the estimate
for some time t
calculated using a time step Δt,
then for both the Euler and Backward Euler methods the approximate
solution is related to the true solution by Y(t,Δt)
= y(t) + cΔt^{2}.
Similarly an estimate using a step size Δt/2
will follow Y(t,Δt/2)
= y(t) + ^{1}/_{4}cΔt^{2}
as Δt → 0.
Combining these two estimates to try and cancel the O(Δt^{2})
errors gives the improved estimate as

Y^{(1)}(t,Δt/2) = [4Y(t,Δt/2) - Y(t,Δt)]/3. | (71) |

The same approach may be applied to higher order methods such
as those presented in the following sections. It is generally
preferable, however, to utilise a higher order method to start
with, the exception being that calculating both Y(t,Δt)
and Y(t,Δt/2)
allows the two solutions to be compared and thus the truncation
error estimated.
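The combination (71) can be sketched as follows. The example assumes a scheme whose error at fixed t scales as cΔt^{2} — the trapezoidal scheme of equation (74) is used, which for y' = λy can be stepped explicitly — and the test problem y' = -y, y(0) = 1 is likewise an assumption:

```python
import math

# Richardson combination (71): Y1 = [4*Y(t, dt/2) - Y(t, dt)]/3,
# sketched with a scheme whose error at fixed t scales as c*dt^2.
def trapezoidal(lam, y0, dt, nsteps):
    # For y' = lam*y the implicit trapezoidal step rearranges explicitly:
    # Y_{n+1} = Y_n * (1 + lam*dt/2)/(1 - lam*dt/2).
    y = y0
    for _ in range(nsteps):
        y = y * (1 + lam * dt / 2) / (1 - lam * dt / 2)
    return y

coarse = trapezoidal(-1.0, 1.0, 0.1, 10)   # Y(1, dt)
fine = trapezoidal(-1.0, 1.0, 0.05, 20)    # Y(1, dt/2)
improved = (4 * fine - coarse) / 3         # equation (71)
print(abs(coarse - math.exp(-1.0)), abs(improved - math.exp(-1.0)))
```

The comparison of the two printed errors shows the extrapolated estimate improving markedly on either raw solution.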

If we use *central differences* rather than
the *forward difference* of the Euler method or the *backward
difference* of the backward Euler, we may obtain a second order
method due to cancellation of the terms of O(Δt^{2}).
Using the same discretisation of t
we obtain

Y'_{n+1/2} =~ (Y_{n+1} - Y_{n})/Δt. | (72) |

Substitution into our differential equation for Y_{n}
gives

(Y_{n+1} - Y_{n})/Δt =~ f(t_{n+1/2},Y_{n+1/2}). | (73) |

The requirement for f(t_{n+1/2},Y_{n+1/2})
is then satisfied by a linear interpolation for f
between t_{n} and t_{n+1}, giving

Y_{n+1} - Y_{n} =~ ^{1}/_{2}[f(t_{n+1},Y_{n+1}) + f(t_{n},Y_{n})]Δt. | (74) |

As with the Backward Euler, the method is implicit and it is not,
in general, possible to write an explicit expression for Y_{n+1}.

Formal proof that the Crank-Nicholson method is second
order accurate is slightly more complicated than for the Euler
and Backward Euler methods due to the linear interpolation to
approximate f(t_{n+1/2},Y_{n+1/2}).
The overall approach is much the same, however, with a requirement
for Taylor Series expansions about t_{n}:

Y_{n+1} = Y_{n} + Δt Y'_{n} + ½Δt^{2}Y"_{n} + ^{1}/_{6}Δt^{3}Y'"_{n} + O(Δt^{4}), | (75a) |

f_{n+1} = f_{n} + Δt f'_{n} + ½Δt^{2}f"_{n} + O(Δt^{3}). | (75b) |

Substitution of these into the left- and right-hand sides of equation (74) reveals

Y_{n+1} - Y_{n} = Δt Y'_{n} + ½Δt^{2}Y"_{n} + ^{1}/_{6}Δt^{3}Y'"_{n} + O(Δt^{4}) | (76a) |

and

^{1}/_{2}[f(t_{n+1},Y_{n+1}) + f(t_{n},Y_{n})]Δt = Δt Y'_{n} + ½Δt^{2}Y"_{n} + ^{1}/_{4}Δt^{3}Y'"_{n} + O(Δt^{4}), | (76b) |

which are equal up to O(Δt^{3}).
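The update (74) can be sketched for a general f, with the implicit equation again solved by fixed-point iteration (an illustrative choice). The nonlinear test problem y' = -y^2, y(0) = 1, with exact solution 1/(1 + t), is an assumption:

```python
# Crank-Nicholson sketch, equation (74):
# Y_{n+1} - Y_n =~ (dt/2)[f(t_{n+1}, Y_{n+1}) + f(t_n, Y_n)].
# Assumed test problem: y' = -y^2, y(0) = 1, exact solution 1/(1 + t).
def crank_nicholson(f, t0, y0, dt, nsteps, iters=30):
    t, y = t0, y0
    for _ in range(nsteps):
        fn = f(t, y)
        y_next = y + dt * fn             # Euler predictor as initial guess
        for _ in range(iters):
            y_next = y + 0.5 * dt * (fn + f(t + dt, y_next))
        t, y = t + dt, y_next
    return y

y_cn = crank_nicholson(lambda t, y: -y * y, 0.0, 1.0, 0.01, 100)
print(y_cn, 1 / (1 + 1.0))  # the two values should be close
```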

As an alternative, the accuracy of the approximation
to the derivative may be improved by using a linear combination
of additional points. By utilising only the current and earlier
values Y_{n-s+1}, …, Y_{n}, the resulting method remains explicit.
For s = 2 we have

Y'_{n} = f_{n}, | |

Y"_{n} =~ (f_{n} - f_{n-1})/Δt, | (77) |

and so we may construct a second order method as

Y_{n+1} = Y_{n} + Δt Y'_{n} + ½Δt^{2}Y"_{n} | |

= Y_{n} + ^{1}/_{2}Δt(3f_{n} - f_{n-1}). | (78) |

For s = 3
we also have Y'"_{n},
and so can make use of a second-order one-sided finite difference
to approximate the derivatives, giving

Y_{n+1} = Y_{n} + Δt Y'_{n} + ½Δt^{2}Y"_{n} + ^{1}/_{6}Δt^{3}Y'"_{n} | |

= Y_{n} + ^{1}/_{12}Δt(23f_{n} - 16f_{n-1} + 5f_{n-2}). | (79) |

These methods are called *Adams-Bashforth* methods. Note
that s = 1
recovers the Euler method.
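The s = 2 update (78) can be sketched as below. A multistep method needs a starting procedure for its first step; a single Euler step is assumed here (a common but not unique choice), and the test problem y' = -y is likewise an assumption:

```python
import math

# Second order Adams-Bashforth sketch, equation (78):
# Y_{n+1} = Y_n + (dt/2)(3*f_n - f_{n-1}).
def adams_bashforth2(f, t0, y0, dt, nsteps):
    t, y = t0, y0
    f_prev = f(t, y)
    y = y + dt * f_prev                  # Euler starting step (assumed)
    t += dt
    for _ in range(nsteps - 1):
        f_curr = f(t, y)
        y = y + 0.5 * dt * (3 * f_curr - f_prev)
        f_prev = f_curr
        t += dt
    return y

y_ab2 = adams_bashforth2(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
print(y_ab2, math.exp(-1.0))  # the two values should be close
```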

Implicit counterparts of the Adams-Bashforth methods are also possible
if we use information about f_{n+1}
in addition to the earlier values. For example, taking

Y'_{n} = f_{n}, | |

Y"_{n} =~ (f_{n+1} - f_{n-1})/2Δt, | |

Y'"_{n} =~ (f_{n+1} - 2f_{n} + f_{n-1})/Δt^{2}, | (80) |

to give

Y_{n+1} = Y_{n} + Δt Y'_{n} + ½Δt^{2}Y"_{n} + ^{1}/_{6}Δt^{3}Y'"_{n} | |

= Y_{n} + ^{1}/_{12}Δt(5f_{n+1} + 8f_{n} - f_{n-1}). | (81) |

This family of implicit methods is known as *Adams-Moulton*
methods.
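The update (81) can be sketched on the linear model problem y' = λy, for which the implicit relation rearranges explicitly; the single earlier value a two-step method needs is seeded with the exact solution, an assumed choice for the example:

```python
import math

# Adams-Moulton sketch, equation (81):
# Y_{n+1} = Y_n + (dt/12)(5*f_{n+1} + 8*f_n - f_{n-1}), with f = lam*y,
# rearranged as Y_{n+1}(1 - 5*lam*dt/12) = Y_n(1 + 2*lam*dt/3) - Y_{n-1}*lam*dt/12.
def adams_moulton2_linear(lam, y0, dt, nsteps):
    y_prev = y0
    y = y0 * math.exp(lam * dt)          # exact value seeds the first step
    for _ in range(nsteps - 1):
        y_next = (y * (1 + 2 * lam * dt / 3)
                  - y_prev * lam * dt / 12) / (1 - 5 * lam * dt / 12)
        y_prev, y = y, y_next
    return y

y_am = adams_moulton2_linear(-1.0, 1.0, 0.01, 100)
print(y_am, math.exp(-1.0))  # the two values should be very close
```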

The stability of a method can be even more important than its accuracy as measured by the order of the truncation error. Suppose we are solving

y' = λy, | (82) |

for some complex λ.
The exact solution is bounded (*i.e.* does not increase without
limit) provided Re λ <= 0.
Substituting this into the Euler method shows

Y_{n+1} = (1 + λΔt)Y_{n} = (1 + λΔt)^{2}Y_{n-1} = … = (1 + λΔt)^{n+1}Y_{0}. | (83) |

If Y_{n}
is to remain bounded for increasing n, we require

|1 + λΔt| <= 1. | (84) |

If we choose a time step Δt
which does not satisfy (84)
then Y_{n}
will increase without limit. This condition (84)
on Δt is the stability condition for the Euler method.
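The condition (84) can be checked numerically. With the assumed value λ = -10 it requires Δt <= 0.2; the sketch below runs the Euler recurrence once with Δt = 0.15 (satisfying (84)) and once with Δt = 0.25 (violating it):

```python
# Stability of the Euler method on y' = lam*y, condition (84):
# |1 + lam*dt| <= 1.  With lam = -10 this requires dt <= 0.2.
def euler_lambda(lam, dt, nsteps, y0=1.0):
    y = y0
    for _ in range(nsteps):
        y = (1 + lam * dt) * y           # equation (83), one step at a time
    return y

stable = euler_lambda(-10.0, 0.15, 200)   # |1 - 1.5| = 0.5 <= 1: decays
unstable = euler_lambda(-10.0, 0.25, 200) # |1 - 2.5| = 1.5 > 1: grows
print(abs(stable), abs(unstable))
```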

The reason why we consider the behaviour of equation
(82) is that it
is a model for the behaviour of small errors. Suppose that at
some stage during the solution process our approximate solution
is y* = y + ε,
where ε
is the (small) error. Substituting this into our differential
equation of the form y' = f(t,y)
and using a Taylor Series expansion gives

ε' = f(t,y+ε) - f(t,y) = ε ∂f/∂y + O(ε^{2}). | (85) |

Thus, to the leading order, the error obeys an equation of the
form given by (82),
with λ = ∂f/∂y.
As it is desirable for errors to decrease (and thus the solution
remain stable) rather than increase (and the solution be unstable),
the limit on the time step suggested by ( 84 )
applies for the application of the Euler method to any ordinary
differential equation. A consequence of the decay of errors present
at one time step as the solution process proceeds is that memory
of a particular time step's contribution to the global truncation
error decays as the solution advances through time. Thus the global
truncation error is dominated by the local truncation error(s)
of the most recent step(s), and O(E_{n}) = O(e_{n}).

In comparison, solution of (82) by the Backward Euler method,

Y_{n+1} = Y_{n} + ΔtλY_{n+1}, | (86) |

can be rearranged for Y_{n+1} as

Y_{n+1} = Y_{n}/(1 - λΔt) = Y_{n-1}/(1 - λΔt)^{2} = … = Y_{0}/(1 - λΔt)^{n+1}, | (87) |

which will be stable provided

|1 - λΔt| >= 1. | (88) |

For Re λ <= 0 this is always satisfied and so the Backward Euler method is unconditionally stable.

The Crank-Nicholson method may be analysed in a similar fashion with

(1 - λΔt/2)Y_{n+1} = (1 + λΔt/2)Y_{n}, | (89) |

to arrive at

Y_{n+1} = [(1 + λΔt/2)/(1 - λΔt/2)]^{n+1}Y_{0}, | (90) |

with the magnitude of the term in square brackets always less
than unity for Re λ < 0.
Thus, like Backward Euler, Crank-Nicholson is unconditionally
stable.

In general, explicit methods require less computation per step, but are only conditionally stable and so may require far smaller step sizes than an implicit method of nominally the same order.

Predictor-corrector methods try to combine the advantages
of the simplicity of explicit methods with the improved stability
and accuracy of implicit methods. They achieve this by using an
explicit method to *predict* the solution Y^{(1)}_{n+1}
and an implicit method to *correct* this prediction.

The simplest of these methods combines the Euler method as the predictor

Y^{(1)}_{n+1} = Y_{n} + Δt f(t_{n},Y_{n}), | (91) |

and then the Backward Euler to give the corrector

Y^{(2)}_{n+1} = Y_{n} + Δt f(t_{n+1},Y^{(1)}_{n+1}). | (92) |

The final solution is the mean of these:

Y_{n+1} = (Y^{(1)}_{n+1} + Y^{(2)}_{n+1})/2. | (93) |
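The three steps (91) to (93) can be sketched directly in Python; the test problem y' = -y is an assumed example:

```python
import math

# Improved Euler predictor-corrector, equations (91)-(93):
# predict with Euler, correct with Backward Euler, then average.
def improved_euler(f, t0, y0, dt, nsteps):
    t, y = t0, y0
    for _ in range(nsteps):
        y1 = y + dt * f(t, y)            # predictor, equation (91)
        y2 = y + dt * f(t + dt, y1)      # corrector, equation (92)
        y = 0.5 * (y1 + y2)              # mean of the two, equation (93)
        t += dt
    return y

y_ie = improved_euler(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
print(y_ie, math.exp(-1.0))  # the two values should be close
```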

To understand the stability of this method we again use the model
problem y' = λy,
so that the three steps described by equations (91)
to (93)
become

Y^{(1)}_{n+1} = Y_{n} + λΔtY_{n}, | (94a) |

Y^{(2)}_{n+1} = Y_{n} + λΔtY^{(1)}_{n+1} | |

= Y_{n} + λΔt(Y_{n} + λΔtY_{n}) | |

= (1 + λΔt + λ^{2}Δt^{2})Y_{n}, | (94b) |

Y_{n+1} = (Y^{(1)}_{n+1} + Y^{(2)}_{n+1})/2 | |

= [(1 + λΔt)Y_{n} + (1 + λΔt + λ^{2}Δt^{2})Y_{n}]/2 | |

= (1 + λΔt + ^{1}/_{2}λ^{2}Δt^{2})Y_{n} | |

= (1 + λΔt + ^{1}/_{2}λ^{2}Δt^{2})^{n+1}Y_{0}. | (94c) |

Stability requires |1 + λΔt
+ ^{1}/_{2}λ^{2}Δt^{2}| < 1
(for Re λ < 0),
which in turn restricts Δt < 2|λ|^{-1}.
Thus the stability of this method, commonly known as the Improved
Euler method, is identical to the Euler method. This is not surprising
as it is limited by the stability of the initial predictive step.
The accuracy of the method is, however, second order, as may be
seen by comparison of (94c)
with the Taylor Series expansion of the exact solution.

The Improved Euler method is the simplest of a family
of similar predictor-corrector methods following the form of a
single *predictor* step and one or more *corrector*
steps. The corrector step may be repeated a fixed number of times,
or until the estimate for Y_{n+1} converges.

One subgroup of this family is the Runge-Kutta methods, which use a fixed number of corrector steps. The Improved Euler method is the simplest of this subgroup. Perhaps the most widely used of these is the fourth order method:

k^{(1)} = Δt f(t_{n},Y_{n}), | (95a) |

k^{(2)} = Δt f(t_{n} + ½Δt, Y_{n} + ½k^{(1)}), | (95b) |

k^{(3)} = Δt f(t_{n} + ½Δt, Y_{n} + ½k^{(2)}), | (95c) |

k^{(4)} = Δt f(t_{n} + Δt, Y_{n} + k^{(3)}), | (95d) |

Y_{n+1} = Y_{n} + (k^{(1)} + 2k^{(2)} + 2k^{(3)} + k^{(4)})/6. | (95e) |

In order to analyse this we need to construct Taylor Series expansions
for k^{(2)} = Δt f(t_{n} + ½Δt, Y_{n} + ½k^{(1)}),
and similarly for k^{(3)} and k^{(4)}.
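The update (95a)-(95e) can be sketched as below; the test problem y' = -y is an assumed example:

```python
import math

# Classical fourth order Runge-Kutta method, equations (95a)-(95e).
def rk4(f, t0, y0, dt, nsteps):
    t, y = t0, y0
    for _ in range(nsteps):
        k1 = dt * f(t, y)
        k2 = dt * f(t + dt / 2, y + k1 / 2)
        k3 = dt * f(t + dt / 2, y + k2 / 2)
        k4 = dt * f(t + dt, y + k3)
        y = y + (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += dt
    return y

y_rk = rk4(lambda t, y: -y, 0.0, 1.0, 0.1, 10)
print(y_rk, math.exp(-1.0))  # close even with this coarse step
```

Note that even with the comparatively coarse step Δt = 0.1 the fourth order method is far more accurate than the first and second order schemes above at much smaller steps.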


Stuart Dalziel, last page update: 17 February 1998