# 22 Linear stability analysis

# Introduction

We know that if an ordinary differential equation equals zero at some density, that density is an *equilibrium* of the system, meaning that if we reach that exact density, the model says we stay at that density forever. Two important and related questions are (1) “are we ever likely to reach that equilibrium?” and (2) “if we move a small distance away from that equilibrium, will we be pulled back towards it?” These questions are about the *stability* of an equilibrium.

# One-dimensional systems

Suppose we have some population with density [latex]X[/latex] whose dynamics are governed by some ODE [latex]dX/dt[/latex]. We can imagine sketching [latex]dX/dt[/latex] as a function of [latex]X[/latex]. Where the curve passes through zero we have an equilibrium. How can we tell if that equilibrium is stable?

If we look at the curve close to the equilibrium there are generically two forms it could take:

- The curve starts above zero and crosses to below zero.
- The curve starts below zero and crosses to above zero.

(In theory we could have a curve that just ‘touches’ zero and then goes back the same way, but this would be a very special type of point).

In case 1, for values of [latex]X[/latex] less than the equilibrium we have [latex]dX/dt\gt0[/latex], so the value of [latex]X[/latex] will be increasing towards the equilibrium. Similarly, for values of [latex]X[/latex] greater than the equilibrium we have [latex]dX/dt\lt0[/latex], so the value of [latex]X[/latex] will be decreasing towards the equilibrium. In both directions, we are pulled in towards the equilibrium so it is *stable*.

In case 2, for values of [latex]X[/latex] less than the equilibrium we have [latex]dX/dt\lt0[/latex], so the value of [latex]X[/latex] will be decreasing away from the equilibrium. Similarly, for values of [latex]X[/latex] greater than the equilibrium we have [latex]dX/dt\gt0[/latex], so the value of [latex]X[/latex] will be increasing away from the equilibrium. In both directions, we are pushed away from the equilibrium so it is *unstable*.

This hopefully makes sense in terms of sketching the curve, but we can calculate this stability directly from the ODE. If we think about those two cases, what we want to know is whether the curve defined by [latex]dX/dt[/latex] is decreasing (case 1) or increasing (case 2) at the equilibrium. In other words, if we were to approximate the dynamics as a straight line, would its gradient be positive or negative? We can find this by *linearising* the system. For ease of writing, let us set [latex]dX/dt=f(X)[/latex]. We can then take a *Taylor expansion* of our system near the equilibrium, which we’ll call [latex]X^*[/latex], as,

[latex]\begin{equation} f(X)=f(X^*)+\frac{df(X^*)}{dX}(X-X^*)+\frac{1}{2}\frac{d^2f(X^*)}{dX^2}(X-X^*)^2+... \end{equation}[/latex]

Now, since we are near the equilibrium we know that [latex]f(X^*)[/latex], that is the value of [latex]dX/dt[/latex] at the equilibrium [latex]X^*[/latex] is zero. Again, assuming we are near the equilibrium we can also say terms like [latex](X-X^*)^2[/latex] are extremely small. We can therefore approximate the dynamics near the equilibrium by the linear system [latex]dX/dt=(df(X^*)/dX)(X-X^*)[/latex]. Notice that this involves precisely the gradient we have talked about, [latex]df(X^*)/dX[/latex].
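To make the connection to stability explicit, we can solve this linearised equation directly. Writing [latex]\lambda=df(X^*)/dX[/latex] for the gradient, the linear ODE [latex]dX/dt=\lambda(X-X^*)[/latex] has solution,

[latex]\begin{equation} X(t)=X^*+\left(X(0)-X^*\right)e^{\lambda t}, \end{equation}[/latex]

so a small perturbation from the equilibrium decays back towards [latex]X^*[/latex] when [latex]\lambda\lt0[/latex], and grows when [latex]\lambda\gt0[/latex].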

In short, then, to determine whether an equilibrium is stable in a one-dimensional system, we take the derivative of the ODE with respect to the variable, substitute in the value of the variable at the equilibrium, and check its sign: if it is negative the equilibrium is stable, and if it is positive the equilibrium is unstable.
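As a concrete sketch, this whole recipe can be carried out symbolically. The example below uses sympy (an assumption on our part – any computer algebra system would do) on the logistic growth model [latex]dX/dt=rX(1-X/K)[/latex], chosen purely for illustration.

```python
# Hedged sketch: 1-D stability of logistic growth, dX/dt = r X (1 - X/K),
# checked symbolically with sympy. The model is illustrative only.
import sympy as sp

X = sp.symbols("X")
r, K = sp.symbols("r K", positive=True)
f = r * X * (1 - X / K)                 # dX/dt = f(X)

equilibria = sp.solve(sp.Eq(f, 0), X)   # densities where f(X) = 0
dfdX = sp.diff(f, X)                    # gradient of f

for Xstar in equilibria:
    gradient = sp.simplify(dfdX.subs(X, Xstar))
    # negative gradient => stable, positive => unstable
    print(Xstar, gradient)
```

Here the gradient at [latex]X^*=0[/latex] is [latex]r\gt0[/latex] (unstable), while at [latex]X^*=K[/latex] it is [latex]-r\lt0[/latex] (stable), matching the graphical argument above.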

# Two-dimensional systems

We made the case for stability depending on the linearised system in a one-dimensional system from a graphical argument about the gradient of the ODE itself near an equilibrium. For a two-dimensional system this is slightly harder to picture, but in fact the basic argument is the same. If we are near an equilibrium we can take a Taylor expansion of our system and approximate the dynamics by the linear terms. Now assume we have a generic system given by,

[latex]\begin{align} \frac{dX}{dt}&=f(X,Y)\\ \frac{dY}{dt}&=g(X,Y) \end{align}[/latex]

and that there is an equilibrium at [latex](X^*,Y^*)[/latex]. The Taylor expansion of the first ODE near to the equilibrium gives,

[latex]\begin{equation} f(X,Y)=f(X^*,Y^*)+\frac{df(X^*,Y^*)}{dX}(X-X^*)+\frac{df(X^*,Y^*)}{dY}(Y-Y^*)+\frac{1}{2}\frac{d^2f(X^*,Y^*)}{dX^2}(X-X^*)^2+\frac{1}{2}\frac{d^2f(X^*,Y^*)}{dY^2}(Y-Y^*)^2+\frac{d^2f(X^*,Y^*)}{dXdY}(X-X^*)(Y-Y^*)+... \end{equation}[/latex]

and similarly for [latex]g(X,Y)[/latex]. Making the same assumptions about being near the equilibrium we arrive at the linearised system,

[latex]\begin{align} \frac{dX}{dt}&=\frac{df(X^*,Y^*)}{dX}(X-X^*)+\frac{df(X^*,Y^*)}{dY}(Y-Y^*)\\ \frac{dY}{dt}&=\frac{dg(X^*,Y^*)}{dX}(X-X^*)+\frac{dg(X^*,Y^*)}{dY}(Y-Y^*). \end{align}[/latex]

This leaves us with a two-dimensional linear system. We won’t go into deep detail here as to how to analyse such systems – if you need more background, any textbook or lecture notes on *planar linear systems of ODEs* will give you the detail you need. We will jump ahead and say that, if we set [latex]x=X-X^*[/latex] and [latex]y=Y-Y^*[/latex], then the solution to such a system can be written down as,

[latex]\begin{align} x(t)&=x_1e^{\lambda_1 t}+x_2e^{\lambda_2 t}\\ y(t)&=y_1e^{\lambda_1 t}+y_2e^{\lambda_2 t}. \end{align}[/latex]

In this case our equilibrium is at [latex](x,y)=(0,0)[/latex]. We can therefore say that we will move towards the equilibrium – meaning it is stable – if [latex]\lambda_1,\lambda_2\lt0[/latex] (or, more generally, if both have negative real parts); otherwise we will move further away from it – meaning it is unstable. These two values are the *eigenvalues* of the system. These can be found by writing out the *Jacobian matrix* of the system, which is given by,

[latex]\begin{equation*} J=\left(\begin{array}{cc} \frac{df(X^*,Y^*)}{dX} & \frac{df(X^*,Y^*)}{dY} \\ \frac{dg(X^*,Y^*)}{dX} & \frac{dg(X^*,Y^*)}{dY}\end{array}\right). \end{equation*}[/latex]

Formally we find the eigenvalues by writing out the *characteristic equation*, given by [latex]\det(J-\lambda I)=0[/latex] where [latex]I[/latex] is the identity matrix. This will give us a quadratic equation in [latex]\lambda[/latex] that we can try to solve. Often, however, we will just look at the signs of the *trace* and *determinant* of [latex]J[/latex], since these dictate the signs of the eigenvalues. In particular,
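As a sketch of this recipe, the Jacobian and its eigenvalues can be computed symbolically. The predator–prey-style system below is hypothetical, chosen only to illustrate the steps, and sympy is again assumed to be available.

```python
# Hedged sketch: Jacobian and eigenvalues of a hypothetical planar system
#   dX/dt = X(1 - X - Y),  dY/dt = Y(X - 1/2),
# evaluated at its coexistence equilibrium (X*, Y*) = (1/2, 1/2).
import sympy as sp

X, Y = sp.symbols("X Y")
f = X * (1 - X - Y)                   # dX/dt
g = Y * (X - sp.Rational(1, 2))       # dY/dt

Xstar, Ystar = sp.Rational(1, 2), sp.Rational(1, 2)

J = sp.Matrix([[sp.diff(f, X), sp.diff(f, Y)],
               [sp.diff(g, X), sp.diff(g, Y)]])
J_at_eq = J.subs({X: Xstar, Y: Ystar})

eigenvalues = list(J_at_eq.eigenvals())
print(J_at_eq, eigenvalues)
```

For this example the eigenvalues have negative real part, so the equilibrium is stable; the same conclusion follows from the trace and determinant rules below.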

- if [latex]\det(J)>0[/latex] and [latex]tr(J)\lt0[/latex], the equilibrium is stable,
- if [latex]\det(J)>0[/latex] and [latex]tr(J)\gt0[/latex], the equilibrium is unstable,
- if [latex]\det(J)\lt0[/latex], the equilibrium is a saddle (one eigenvalue is positive, the other negative).

We can add further detail by noting that if the eigenvalues are complex we will see *spirals* (and this occurs when [latex]tr(J)^2-4\det(J)\lt0[/latex]), but if they are purely real we will see *nodes*.
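These rules are simple enough to collect into a small helper function. The sketch below (the name `classify` is our own, hypothetical choice) takes the trace and determinant as plain numbers; the borderline cases [latex]tr(J)=0[/latex] and [latex]\det(J)=0[/latex] are deliberately left out, as they need separate care.

```python
# Hedged sketch: classify a planar equilibrium from the trace and
# determinant of its Jacobian. Borderline cases (tr = 0 or det = 0)
# are not handled.
def classify(tr, det):
    if det < 0:
        return "saddle"                      # eigenvalues of opposite sign
    stability = "stable" if tr < 0 else "unstable"
    # complex eigenvalues (spirals) occur when tr^2 - 4 det < 0
    kind = "spiral" if tr * tr - 4 * det < 0 else "node"
    return f"{stability} {kind}"
```

For example, a Jacobian with [latex]tr(J)=-1[/latex] and [latex]\det(J)=2[/latex] gives a stable spiral, since the trace is negative and [latex]tr(J)^2-4\det(J)=-7\lt0[/latex].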

# Higher-dimensional systems

The approach outlined above for two-dimensional systems can be extended to three-dimensional or higher systems, but is generally much harder work. Ultimately we look to linearise the system and find the eigenvalues. However, since the characteristic equation will now be a cubic or even higher-order polynomial, there is no simple formula to solve it. Sometimes we will get lucky and it will nicely factorise – a couple of examples like this are seen in chapters 9 and 15 – but usually it is too complicated. We can also use an extension of the trace-determinant approach. In fact, that approach is the two-dimensional special case of a method for testing stability called the *Routh-Hurwitz criteria*. This approach involves looking at the coefficients of the characteristic equation (the first and last of which are, up to sign, the trace and determinant of your Jacobian) and checking conditions on their signs. Again, sometimes we can use this approach without too much trouble (like in chapter 6) but it is often challenging.
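As an illustrative sketch of the three-dimensional case: for a cubic characteristic equation [latex]\lambda^3+a_1\lambda^2+a_2\lambda+a_3=0[/latex], the Routh-Hurwitz criteria say all eigenvalues have negative real part exactly when [latex]a_1\gt0[/latex], [latex]a_3\gt0[/latex] and [latex]a_1a_2\gt a_3[/latex]. The helper name below is our own, and numpy is assumed only for the numerical cross-check.

```python
# Hedged sketch: Routh-Hurwitz stability test for a cubic characteristic
# equation lambda^3 + a1*lambda^2 + a2*lambda + a3 = 0.
import numpy as np

def routh_hurwitz_stable(a1, a2, a3):
    # All roots have negative real part iff these three conditions hold.
    return a1 > 0 and a3 > 0 and a1 * a2 > a3

# Cross-check against the roots of an example polynomial,
# (lambda + 1)(lambda + 2)(lambda + 3) = lambda^3 + 6 lambda^2 + 11 lambda + 6
roots = np.roots([1.0, 6.0, 11.0, 6.0])
print(routh_hurwitz_stable(6.0, 11.0, 6.0), roots)
```

This avoids solving the cubic at all: only the signs of a few coefficient combinations are needed, which is exactly what makes the criteria usable by hand in simple cases.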