Department of Applied Mathematics and Theoretical Physics

Numerical Analysis Course

Stability of Finite Difference Methods

The stability of finite difference methods is essential to choosing an appropriate method for computing numerical solutions.

Introduction

We can solve various Partial Differential Equations with initial conditions using a finite difference scheme. The diffusion equation, for example, might use a scheme such as:

$$u_m^{n+1} = u_m^n + \mu(u_{m-1}^n-2u_m^n + u_{m+1}^n)$$

where $u_m^n \approx u(mh, nk)$ approximates the solution of $u_t = u_{xx}$ and $\mu = \frac{k}{h^2}$.

Implementation

We can implement these finite difference methods in MATLAB using (sparse) matrix multiplication. Using the example given above we have:

$$u^{n+1} = \begin{pmatrix} 1-2\mu & \mu & & \\ \mu & \ddots & \ddots & \\ & \ddots & \ddots & \mu \\ & & \mu & 1-2\mu \end{pmatrix} u^n$$

Generating this sparse, tridiagonal matrix can be done using the spdiags function. For example (with $\mu = 0.5$):

e = ones(5, 1);
A = spdiags( [0.5*e (1-2*0.5)*e 0.5*e], -1:1, 5, 5);
disp(full(A))
         0    0.5000         0         0         0
    0.5000         0    0.5000         0         0
         0    0.5000         0    0.5000         0
         0         0    0.5000         0    0.5000
         0         0         0    0.5000         0

full is necessary to display the whole matrix rather than the sparse data which it is holding.

For other methods we modify the matrix, or matrices, used at each time step, but the approach is the same. For a full implementation see ...
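The GUI itself is written in MATLAB, but the same time-stepping idea can be sketched in Python with SciPy's sparse matrices (a rough translation, not the downloadable code; grid size, step count and the sine initial condition are our own choices):

```python
import numpy as np
from scipy.sparse import diags

# Euler scheme for u_t = u_xx on [0,1] with zero Dirichlet boundary values.
M = 99                      # number of interior grid points
h = 1.0 / (M + 1)           # mesh size
mu = 0.5                    # mu = k / h^2, at the stability limit
k = mu * h**2               # time step

# Tridiagonal iteration matrix: 1-2*mu on the diagonal, mu off it,
# mirroring the MATLAB spdiags call above.
e = np.ones(M)
A = diags([mu * e[:-1], (1 - 2 * mu) * e, mu * e[:-1]], [-1, 0, 1],
          format="csr")

x = np.linspace(h, 1 - h, M)
u = np.sin(np.pi * x)       # initial condition
for n in range(200):
    u = A @ u               # one time step: u^{n+1} = A u^n
```

Each step is a single sparse matrix-vector product, exactly as in the matrix form displayed above.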

Stability and Convergence

An important feature that we wish our methods to have is convergence: (roughly) as mesh size tends to zero, we want our numerical solution to tend (uniformly) to the true solution.

In the lecture notes, we can see that for the method discussed above convergence depends on $\mu$.

The lecture notes also define what it means for a numerical method to be stable: (roughly) the numerical solution remains bounded (in a given interval) for zero boundary conditions. The Lax equivalence theorem tells us that, for suitable PDEs and methods with error $O(h^{p+1})$, $p \ge 2$, stability and convergence are equivalent.
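For the Euler scheme above, stability can be checked directly from the eigenvalues of the iteration matrix: the $M \times M$ tridiagonal matrix with $1-2\mu$ on the diagonal and $\mu$ off it has eigenvalues $1 - 4\mu\sin^2\!\big(\tfrac{m\pi h}{2}\big)$. A small numerical check (our own sketch, with an arbitrary grid size):

```python
import numpy as np

def spectral_radius(mu, M=50):
    # Eigenvalues of the Euler iteration matrix for u_t = u_xx:
    #   1 - 4*mu*sin(m*pi*h/2)^2,  m = 1, ..., M,  h = 1/(M+1).
    m = np.arange(1, M + 1)
    h = 1.0 / (M + 1)
    eigs = 1 - 4 * mu * np.sin(m * np.pi * h / 2) ** 2
    return np.max(np.abs(eigs))

# mu <= 1/2 keeps every eigenvalue in [-1, 1]; mu > 1/2 pushes the
# highest-frequency eigenvalue below -1, so iterates grow.
```

This is where the threshold $\mu \leq 1/2$ comes from: the factor $4\mu\sin^2$ can approach $4\mu$, and we need $1 - 4\mu \ge -1$.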

The GUI

Running the downloadable MATLAB code on this page opens a GUI which allows you to experiment with varying Courant numbers, methods, initial conditions and mesh size.

[Screenshot: the stability GUI]

The Diffusion Equation (Euler Method)

This is the method which has already been discussed earlier on this page. We know from our notes that this will be stable for $\mu \leq 1/2$ and unstable for $\mu > 1/2$. Try using $\mu = 0.5$ and $\mu = 0.51$. The solution will evolve very slowly (each time step for a mesh size of 100 will be $k = \mu h^2 = 5 \times 10^{-5}$); however, if we try to take larger steps through time ($5.1 \times 10^{-5}$, for instance) the method becomes unstable.
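The same experiment can be run outside the GUI. In this sketch (our own demo, not the downloadable code) the initial data carries a small high-frequency component so that any instability is excited quickly:

```python
import numpy as np

def run_euler(mu, steps=500, M=99):
    # Euler steps for u_t = u_xx with zero Dirichlet boundaries.
    x = np.arange(1, M + 1) / (M + 1)
    # Smooth initial data plus a small alternating-sign perturbation,
    # which feeds the highest-frequency grid mode.
    u = np.sin(np.pi * x) + 0.01 * (-1.0) ** np.arange(M)
    for _ in range(steps):
        v = np.pad(u, 1)        # zero values at both boundaries
        u = u + mu * (v[:-2] - 2 * u + v[2:])
    return np.max(np.abs(u))

# run_euler(0.5) stays bounded by the initial maximum;
# run_euler(0.51) blows up by many orders of magnitude.
```

At $\mu = 0.5$ every update is an average of neighbouring values, so the maximum can never grow; at $\mu = 0.51$ the highest grid mode is amplified at every step.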

The Diffusion Equation (Crank-Nicolson)

We obtained the Euler method above by applying Euler's method to the semidiscretization $\frac{du_m}{dt} = \frac{1}{h^2}(u_{m-1}-2u_m+u_{m+1})$. Using the trapezoidal rule instead, we obtain the Crank-Nicolson method, which is stable for all $\mu$. This means we can choose larger time steps without suffering the instabilities experienced using the Euler method.
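Applying the trapezoidal rule to the semidiscretization gives a linear system to solve at each step, $(I - \tfrac{\mu}{2}T)u^{n+1} = (I + \tfrac{\mu}{2}T)u^n$ with $T = \mathrm{tridiag}(1,-2,1)$. A Python sketch (our own illustration, with $\mu$ chosen well above the Euler limit):

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import splu

M, mu = 99, 5.0                          # mu far above the Euler limit 1/2
e = np.ones(M)
T = diags([e[:-1], -2 * e, e[:-1]], [-1, 0, 1], format="csc")
I = identity(M, format="csc")

# Factor the left-hand matrix once and reuse it every time step.
lhs = splu((I - 0.5 * mu * T).tocsc())
rhs = I + 0.5 * mu * T

x = np.arange(1, M + 1) / (M + 1)
u = np.sin(np.pi * x)
for _ in range(200):
    u = lhs.solve(rhs @ u)               # one Crank-Nicolson step
```

Every mode's amplification factor has the form $(1+\tfrac{\mu}{2}\lambda)/(1-\tfrac{\mu}{2}\lambda)$ with $\lambda \le 0$, which has modulus at most $1$ for any $\mu$: this is why no step-size restriction appears.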

The Wave Equation

$$ \frac{\partial^2u}{\partial t^2} = \frac{\partial^2u}{\partial x^2}$$

By approximating both second derivatives using finite differences, we can obtain a scheme to approximate the wave equation. The main difference here is that we must consider a second initial condition: the value of $u_t$. For the purposes of the illustration we have assumed that this is $0$.

The method obtained in this way is stable for $\mu = \frac{k^2}{h^2} \leq 1$. The additional power of $k$ means that we can take $k$ and $h$ to be of similar order and remain stable (although we must ensure that $\mu \leq 1$). Try running the wave equation for $\mu = 1$ and $\mu = 1.01$.
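The central-difference scheme sketched below (our own Python illustration, not the GUI code) replaces both second derivatives by second differences, giving $u^{n+1}_m = 2u^n_m - u^{n-1}_m + \mu(u^n_{m-1} - 2u^n_m + u^n_{m+1})$; the first step uses $u_t = 0$, which by Taylor expansion gives $u^1 = u^0 + \tfrac{\mu}{2}\,\delta^2 u^0$:

```python
import numpy as np

M = 99
h = 1.0 / (M + 1)
mu = 1.0                                  # at the stability limit, k = h
k = mu**0.5 * h
x = np.arange(1, M + 1) / (M + 1)

def second_diff(u):
    v = np.pad(u, 1)                      # zero Dirichlet boundaries
    return v[:-2] - 2 * u + v[2:]

u_prev = np.sin(np.pi * x)                # u^0
u = u_prev + 0.5 * mu * second_diff(u_prev)   # u^1, from u_t = 0
for _ in range(300):
    u, u_prev = 2 * u - u_prev + mu * second_diff(u), u
```

At $\mu = 1$ the scheme happens to reproduce the exact solution $\sin(\pi x)\cos(\pi t)$ at the grid points, so the computed solution stays bounded by $1$ up to rounding.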

Initial conditions

You can choose your own initial conditions; some which you might like to try include sin(pi*x), sin(2*pi*x) and heaviside(x-0.5).

Boundary conditions

It is also possible to change the boundary conditions which the methods use. The general boundary conditions can be "Robin" boundary conditions: $$a u + b \frac{\partial u}{\partial n} = g$$ on the boundary of the domain. In the case of the domain $[0,1]$ this reduces to: $$au(0)-bu'(0) = g(0), \quad au(1)+bu'(1) = g(1)$$ From this general form, we can obtain Dirichlet boundary conditions by letting $a = 1, b = 0$, and Neumann boundary conditions by letting $a = 0, b = 1$.

The way in which the code implements boundary conditions is somewhat special. For Dirichlet boundary conditions it is reasonably simple, since we just need to fix the values $u_0 = g(0), u_{M+1} = g(1)$. For Neumann boundary conditions we could use the one-sided approximation of the normal derivative $\frac{u_1-u_0}{\Delta x} = -g(0)$, but this would only yield a first order method. In order to sustain a second order method, we need to be more careful about our choice of approximation for $\frac{\partial u}{\partial n}$.

The way we do this involves introducing "ghost points" $u_{-1}$ and $u_{M+2}$, which are never actually used or referenced in the code. Then we can use a symmetric approximation to the normal derivative: $\frac{u_1-u_{-1}}{2\Delta x} = -g(0)$. This allows us to compute a value for our ghost point ($u_{-1} = u_{1} + 2 \Delta x g(0)$). We can then plug this into the general form of our equation.
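Substituting the ghost-point value into the Euler update at $m = 0$ gives $u_0^{n+1} = u_0 + \mu(2u_1 - 2u_0) + 2\mu\,\Delta x\, g(0)$, and symmetrically at the right boundary. A sketch of one such step in Python (our own check, not the GUI code; the function name and signature are invented for illustration):

```python
import numpy as np

def euler_neumann_step(u, mu, h, g0=0.0, g1=0.0):
    # One Euler step for u_t = u_xx with Neumann boundaries du/dn = g,
    # ghost points already eliminated from the boundary rows.
    v = u.copy()
    v[1:-1] = u[1:-1] + mu * (u[:-2] - 2 * u[1:-1] + u[2:])
    v[0] = u[0] + 2 * mu * (u[1] - u[0]) + 2 * mu * h * g0
    v[-1] = u[-1] + 2 * mu * (u[-2] - u[-1]) + 2 * mu * h * g1
    return v

# Sanity check of the boundary rows: with g = 0, constant data is a
# steady state (every row of the iteration matrix sums to 1).
```

This also explains why the ghost points never appear in the code: they are eliminated algebraically before the matrix is built.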

Working out Robin conditions involves the same ghost-point process as for Neumann conditions, keeping both $a$ and $b$ in the approximation.

Code

All files as .zip archive: pde_stability_all.zip