Planar Systems of Differential Equations
A first-order linear system of \(n\) equations in \(n\) variables is any system that can be written in the form
\begin{align*} \frac{dx_1}{dt} & = a_{11}(t) x_1(t) + \cdots + a_{1n}(t) x_n(t) + f_1(t),\\ \frac{dx_2}{dt} & = a_{21}(t) x_1(t) + \cdots + a_{2n}(t) x_n(t) + f_2(t),\\ & \vdots\\ \frac{dx_n}{dt} & = a_{n1}(t) x_1(t) + \cdots + a_{nn}(t) x_n(t) + f_n(t). \end{align*}
If each of the coefficients is constant and the functions \(f_i\) vanish, then we have a homogeneous first-order linear system with constant coefficients,
\begin{align*} \frac{dx_1}{dt} & = a_{11} x_1 + \cdots + a_{1n} x_n,\\ \frac{dx_2}{dt} & = a_{21} x_1 + \cdots + a_{2n} x_n,\\ & \vdots\\ \frac{dx_n}{dt} & = a_{n1} x_1 + \cdots + a_{nn} x_n. \end{align*}
We will concentrate for the time being on \(2 \times 2\) homogeneous first-order linear systems, or planar systems,
\begin{align} \frac{dx}{dt} & = a x + b y,\label{equation-linear02-2x2-1}\tag{3.2.1}\\ \frac{dy}{dt} & = c x + d y.\label{equation-linear02-2x2-2}\tag{3.2.2} \end{align}
Subsection 3.2.1 Planar Systems and \(2 \times 2\) Matrices
We will use linear systems of differential equations to model how substances flow back and forth between two or more compartments. Suppose that we have two tanks (\(A\) and \(B\)) between which a mixture of brine flows (Figure 3.2.1). Tank \(A\) contains 300 liters of water in which 100 kilograms of salt has been dissolved, and Tank \(B\) contains 300 liters of pure water. Fresh water is pumped into Tank \(A\) at the rate of 500 liters per hour, and brine is pumped into Tank \(B\) from Tank \(A\) at the rate of 900 liters per hour. Brine is also pumped back into Tank \(A\) from Tank \(B\) at the rate of 400 liters per hour, and an additional 500 liters of brine per hour is drained from Tank \(B\text{.}\) All brine mixtures are well-stirred. If we let \(x = x(t)\) be the amount of salt in Tank \(A\) at time \(t\) and \(y = y(t)\) be the amount of salt in Tank \(B\) at time \(t\text{,}\) then we know that
\begin{align*} x(0) & = 100\\ y(0) & = 0 \end{align*}
Since the flow rate into each tank equals the flow rate out, each tank always holds 300 liters of liquid, so the salt concentrations in the two tanks are \(x/300\) kilograms per liter and \(y/300\) kilograms per liter, respectively. Thus, we can describe the rate of change in each tank with a differential equation,
\begin{align*} \frac{dx}{dt} & = - 900 \cdot \frac{x}{300} + 400 \cdot \frac{y}{300} = - 3 x + \frac{4}{3} y,\\ \frac{dy}{dt} & = 900 \cdot \frac{x}{300} - 400 \cdot \frac{y}{300} - 500 \cdot \frac{y}{300} = 3x - 3 y. \end{align*}
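Before solving the system analytically, we can sanity-check the model numerically. The following is a minimal sketch assuming SciPy is available; the function name brine is our own choice.

```python
# A minimal numerical sanity check of the tank model; assumes SciPy is
# installed. The function name `brine` is our own choice.
from scipy.integrate import solve_ivp

def brine(t, z):
    """Right-hand side of the tank system, with z = (x, y)."""
    x, y = z
    return [-3 * x + (4 / 3) * y, 3 * x - 3 * y]

# 100 kg of salt in Tank A and none in Tank B at time t = 0.
sol = solve_ivp(brine, (0, 5), [100, 0], dense_output=True)
print(sol.sol(1.0))  # approximate amounts of salt after one hour
```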
Matrix notation gives us a convenient way of representing the \(2 \times 2\) system (3.2.1)–(3.2.2). If we let
\begin{equation*} A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \quad\text{and}\quad {\mathbf x}(t) = \begin{pmatrix} x(t) \\ y(t) \end{pmatrix}, \end{equation*}
then we can rewrite our system as
\begin{equation*} \begin{pmatrix} x'(t) \\ y'(t) \end{pmatrix} = \begin{pmatrix} ax(t) + b y(t) \\ cx(t) + d y(t) \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x(t) \\ y(t) \end{pmatrix}. \end{equation*}
In other words, we can write our system as
\begin{equation*} \frac{d \mathbf x}{dt} = A {\mathbf x}, \end{equation*}
where
\begin{equation*} \mathbf x' = \frac{d \mathbf x}{dt} = \begin{pmatrix} x'(t) \\ y'(t) \end{pmatrix}. \end{equation*}
Subsection 3.2.2 Systems of Differential Equations
A linear planar system
\begin{align*} x' & = ax + by\\ y' & = cx + dy \end{align*}
has an equilibrium solution at \((x_0, y_0)\) if
\begin{align*} a x_0 + b y_0 & = 0,\\ c x_0 + d y_0 & = 0. \end{align*}
The following theorem tells us exactly where to find the equilibrium solutions of a linear system with constant coefficients.
Theorem 3.2.2
Let
\begin{equation*} \frac{d {\mathbf x}}{dt} = A {\mathbf x} \end{equation*}
be a \(2 \times 2\) linear system, where \(A\) is not the zero matrix.
- If \(\det(A) \neq 0\text{,}\) then \((x, y) = (0, 0)\) is the unique equilibrium solution for the system.
- If \(\det(A) = 0\text{,}\) then the equilibrium solutions for the system form a straight line in \({\mathbb R}^2\text{.}\)
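Although we do not prove Theorem 3.2.2 here, it is easy to illustrate numerically. The sketch below assumes NumPy is available; the matrices are examples of our own choosing.

```python
# Illustrating Theorem 3.2.2 with NumPy (assumed available); the matrices
# below are examples of our own choosing.
import numpy as np

A = np.array([[1.0, 3.0], [1.0, -1.0]])
print(np.linalg.det(A))   # -4.0: nonzero, so (0, 0) is the only equilibrium

B = np.array([[1.0, 2.0], [2.0, 4.0]])
print(np.linalg.det(B))   # 0.0: every point on the line x + 2y = 0 is an equilibrium
```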
Now let us attack the problem of finding all of the solutions of the system \({\mathbf x}' = A {\mathbf x}\text{.}\) Suppose that we can find a nonzero vector \({\mathbf v}_0\) such that \(A {\mathbf v}_0 = \lambda {\mathbf v}_0\) for some real number \(\lambda\text{.}\) In this case, the matrix \(A\) just sends the vector \({\mathbf v}_0\) to a vector on the same line through the origin, \(\lambda {\mathbf v}_0\text{.}\) This is a very special case of course; however, we claim that
\begin{equation*} {\mathbf x}(t) = e^{\lambda t} {\mathbf v}_0 \end{equation*}
is a solution for our linear system if we can find such a vector. To see that this is indeed the case, we compute
\begin{align*} {\mathbf x}'(t) & = \lambda e^{\lambda t} {\mathbf v}_0\\ & = e^{\lambda t} (\lambda {\mathbf v}_0)\\ & = e^{\lambda t} (A {\mathbf v}_0 )\\ & = A( e^{\lambda t} {\mathbf v}_0)\\ & = A {\mathbf x}(t). \end{align*}
In other words, the key to solving a linear system \({\mathbf x}' = A {\mathbf x}\) is to be able to find eigenvalues and eigenvectors for the matrix \(A\text{.}\) We are now ready to state the results of our discussion in a theorem.
Theorem 3.2.3
Let \({\mathbf v}_0\) be an eigenvector for the matrix \(A\) with associated eigenvalue \(\lambda\text{.}\) Then the function \({\mathbf x}(t) = e^{\lambda t}{\mathbf v}_0\) is a solution of the system \({\mathbf x}' = A {\mathbf x}\text{.}\)
We say that the solution \({\mathbf x}(t) = e^{\lambda t}{\mathbf v}_0\) is a straight-line solution. The vector \(e^{\lambda t}{\mathbf v}_0\) lies on the same line for each value of \(t\text{.}\) Note that if \({\mathbf v}_0\) is an eigenvector for \(A\text{,}\) then any nonzero multiple of \({\mathbf v}_0\) is also an eigenvector for \(A\text{,}\)
\begin{equation*} A(\alpha {\mathbf v}_0) = \alpha A{\mathbf v}_0 = \alpha (\lambda {\mathbf v}_0) = \lambda (\alpha {\mathbf v}_0). \end{equation*}
Example 3.2.4
Consider the system
\begin{align*} x' & = x + 3y\\ y' & = x - y. \end{align*}
We can rewrite this system in matrix form as \({\mathbf x}' = A {\mathbf x}\text{,}\) where
\begin{equation*} A = \begin{pmatrix} 1 & 3 \\ 1 & -1 \end{pmatrix}. \end{equation*}
The matrix \(A\) has an eigenvector \({\mathbf u} = (3, 1)\) with associated eigenvalue \(\lambda = 2\text{,}\) since
\begin{equation*} A \mathbf u = \begin{pmatrix} 1 & 3 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} 3 \\ 1 \end{pmatrix} = \begin{pmatrix} 6 \\ 2 \end{pmatrix} = 2 \begin{pmatrix} 3\\1 \end{pmatrix} = \lambda \mathbf u. \end{equation*}
Similarly, \({\mathbf v} = (1, -1)\) is an eigenvector for \(A\) with associated eigenvalue \(\mu = -2\text{.}\) Thus, in addition to the equilibrium solution at the origin, we have two straight-line solutions: the solution
\begin{equation*} {\mathbf x}_1(t) = e^{2t} \begin{pmatrix} 3 \\ 1 \end{pmatrix}, \end{equation*}
and the solution
\begin{equation*} {\mathbf x}_2(t) = e^{-2t} \begin{pmatrix} 1 \\ -1 \end{pmatrix}. \end{equation*}
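These eigenpairs are easy to double-check numerically; here is one way, assuming NumPy is available.

```python
# Numerically checking the eigenpairs of Example 3.2.4; assumes NumPy.
import numpy as np

A = np.array([[1.0, 3.0], [1.0, -1.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)    # 2.0 and -2.0 (the order may vary)
# Each column of `eigenvectors` is a unit eigenvector, i.e., a scalar
# multiple of (3, 1) or (1, -1).
print(eigenvectors)
```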
Since
\begin{align*} \frac{d}{dt} (c_1 {\mathbf x}_1(t) + c_2 {\mathbf x}_2(t)) & = c_1\frac{d}{dt} {\mathbf x}_1(t) + c_2 \frac{d}{dt} {\mathbf x}_2(t)\\ & = c_1 A {\mathbf x}_1(t) + c_2 A {\mathbf x}_2(t)\\ & = A( c_1 {\mathbf x}_1(t) + c_2 {\mathbf x}_2(t)), \end{align*}
any linear combination of solutions to a linear system is also a solution. Thus, a general solution to our system is
\begin{equation*} \mathbf x(t) = c_1 e^{2t} \begin{pmatrix} 3 \\ 1 \end{pmatrix} + c_2 e^{-2t} \begin{pmatrix} 1 \\ -1 \end{pmatrix} \end{equation*}
or
\begin{align*} x(t) & = 3 c_1 e^{2t} + c_2 e^{-2t}\\ y(t) & = c_1 e^{2t} - c_2 e^{-2t}. \end{align*}
If we are given initial conditions, say \(x(0) = 0\) and \(y(0) = 1\text{,}\) then we can determine \(c_1\) and \(c_2\) by solving the linear system of equations
\begin{align*} 3 c_1 + c_2 & = 0\\ c_1 - c_2 & = 1 \end{align*}
to get \(c_1 = 1/4\) and \(c_2 = -3/4\text{.}\) Thus, the solution to our initial value problem is
\begin{align*} x(t) & = \frac{3}{4} e^{2t} - \frac{3}{4} e^{-2t}\\ y(t) & = \frac{1}{4} e^{2t} + \frac{3}{4} e^{-2t}. \end{align*}
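As a check, we can verify symbolically that these functions satisfy both the system and the initial conditions. The sketch below assumes SymPy is available.

```python
# Verifying the initial value problem's solution with SymPy (assumed available).
import sympy as sp

t = sp.symbols('t')
x = sp.Rational(3, 4) * sp.exp(2 * t) - sp.Rational(3, 4) * sp.exp(-2 * t)
y = sp.Rational(1, 4) * sp.exp(2 * t) + sp.Rational(3, 4) * sp.exp(-2 * t)

print(sp.simplify(sp.diff(x, t) - (x + 3 * y)))  # 0, so x' = x + 3y
print(sp.simplify(sp.diff(y, t) - (x - y)))      # 0, so y' = x - y
print(x.subs(t, 0), y.subs(t, 0))                # 0 1, the initial conditions
```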
If \({\mathbf x}_1(t)\) and \({\mathbf x}_2(t)\) are solutions to the linear system \({\mathbf x}' = A {\mathbf x}\text{,}\) then
\begin{align*} {\mathbf x}_1' & = A {\mathbf x}_1\\ {\mathbf x}_2' & = A {\mathbf x}_2. \end{align*}
Thus, for any two real numbers \(c_1\) and \(c_2\text{,}\)
\begin{align*} \frac{d}{dt} (c_1 {\mathbf x}_1(t) + c_2 {\mathbf x}_2(t)) & = c_1 \frac{d}{dt} {\mathbf x}_1(t) + c_2 \frac{d}{dt} {\mathbf x}_2(t)\\ & = c_1 A {\mathbf x}_1(t) + c_2 A {\mathbf x}_2(t)\\ & = A (c_1 {\mathbf x}_1(t) + c_2 {\mathbf x}_2(t) ). \end{align*}
We state this result in the following theorem.
Theorem 3.2.5 Principle of Superposition
If \(A\) is a \(2 \times 2\) matrix, then any linear combination of solutions to the linear system \({\mathbf x}' = A {\mathbf x}\) is also a solution.
Revisiting the mixing problem that we posed at the beginning of this section, we have the following initial value problem,
\begin{align*} \frac{dx}{dt} & = - 3 x + \frac{4}{3} y,\\ \frac{dy}{dt} & = 3 x - 3 y,\\ x(0) & = 100,\\ y(0) & = 0. \end{align*}
If we write our system in matrix form, \({\mathbf x}' = A {\mathbf x}\text{,}\) then
\begin{equation*} A = \begin{pmatrix} -3 & 4/3 \\ 3 & -3 \end{pmatrix}. \end{equation*}
It is easy to check that we have eigenvectors \({\mathbf u} = (2, 3)\) and \({\mathbf v} = (-2, 3)\) with associated eigenvalues \(\lambda = -1\) and \(\mu = -5\text{,}\) respectively. Thus, we have two solutions to our system,
\begin{align*} {\mathbf x}_1(t) & = e^{-t} {\mathbf u},\\ {\mathbf x}_2(t) & = e^{-5t} {\mathbf v}. \end{align*}
Since any linear combination of solutions is also a solution,
\begin{equation*} {\mathbf x}(t) = c_1 \begin{pmatrix} 2e^{-t} \\ 3e^{-t} \end{pmatrix} + c_2 \begin{pmatrix} -2e^{-5t} \\ 3e^{-5t} \end{pmatrix} \end{equation*}
is a solution to our system. Using the initial values \(x(0) = 100\) and \(y(0) = 0\text{,}\) we can determine that \(c_1 = 25\) and \(c_2 = -25\text{.}\) We now have the solution that we seek,
\begin{align*} x(t) & = 50 e^{-t} + 50 e^{-5t}\\ y(t) & = 75 e^{-t} - 75 e^{-5t}. \end{align*}
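The constants come from solving the linear system \(c_1 {\mathbf u} + c_2 {\mathbf v} = (100, 0)\text{.}\) One way to carry this out, assuming NumPy is available:

```python
# Solving for c1 and c2 in the mixing problem; assumes NumPy is available.
import numpy as np

V = np.array([[2.0, -2.0],    # columns are the eigenvectors u = (2, 3)
              [3.0,  3.0]])   # and v = (-2, 3)
x0 = np.array([100.0, 0.0])

print(np.linalg.solve(V, x0))  # [ 25. -25.]
```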
Subsection 3.2.3 Solving Linear Systems
Our goal is to prove the following theorem.
Theorem 3.2.6
Suppose that \(A\) has a pair of distinct real eigenvalues, \(\lambda_1\) and \(\lambda_2\text{,}\) with associated eigenvectors \({\mathbf v}_1\) and \({\mathbf v}_2\text{.}\) Then the general solution of the linear system \({\mathbf x}' = A {\mathbf x}\) is given by
\begin{equation*} {\mathbf x}(t) = c_1 e^{\lambda_1 t} {\mathbf v}_1 + c_2 e^{\lambda_2 t} {\mathbf v}_2. \end{equation*}
Lemma 3.2.7
Let \(A\) be a \(2 \times 2\) matrix with a pair of distinct real eigenvalues, \(\lambda_1\) and \(\lambda_2\text{,}\) and associated eigenvectors \({\mathbf v}_1\) and \({\mathbf v}_2\text{,}\) respectively. Then \({\mathbf v}_1\) and \({\mathbf v}_2\) are linearly independent.
Proof
If \({\mathbf v}_1\) and \({\mathbf v}_2\) are linearly dependent, then there exists \(\alpha \neq 0\) such that
\begin{equation} {\mathbf v}_1 = \alpha {\mathbf v}_2.\label{equation-linear02-distinct-real-eigenvalues-1}\tag{3.2.3} \end{equation}
Multiplying both sides of this equation by \(A\text{,}\) we have
\begin{equation} \lambda_1 {\mathbf v}_1 = A{\mathbf v}_1 = \alpha A {\mathbf v}_2 = \alpha \lambda_2 {\mathbf v}_2.\label{equation-linear02-distinct-real-eigenvalues-2}\tag{3.2.4} \end{equation}
On the other hand, we obtain
\begin{equation} \lambda_2 {\mathbf v}_1 = \alpha \lambda_2 {\mathbf v}_2\label{equation-linear02-distinct-real-eigenvalues-3}\tag{3.2.5} \end{equation}
if we multiply both sides of (3.2.3) by \(\lambda_2\text{.}\) Using (3.2.4) and (3.2.5), we can conclude that
\begin{equation*} (\lambda_1 - \lambda_2) {\mathbf v}_1 = \alpha(\lambda_2 - \lambda_2 ){\mathbf v}_2 = 0 {\mathbf v}_2 = {\mathbf 0}. \end{equation*}
Since \(\lambda_1 \neq \lambda_2\text{,}\) this forces \({\mathbf v}_1 = {\mathbf 0}\text{,}\) which contradicts the fact that an eigenvector must be nonzero. Hence, \({\mathbf v}_1\) and \({\mathbf v}_2\) are linearly independent.
We can now proceed to the proof of the theorem. Suppose that we have a linear system \({\mathbf x}' = A{\mathbf x}\) such that \(A\) has a pair of distinct real eigenvalues, \(\lambda_1\) and \(\lambda_2\text{,}\) with associated eigenvectors \({\mathbf v}_1\) and \({\mathbf v}_2\text{.}\) By the Principle of Superposition, we know that
\begin{equation*} {\mathbf x}(t) = c_1 e^{\lambda_1 t} {\mathbf v}_1 + c_2 e^{\lambda_2 t} {\mathbf v}_2 \end{equation*}
is a solution to the linear system \({\mathbf x}' = A {\mathbf x}\text{.}\) To show that this is the general solution, we must show that we can choose \(c_1\) and \(c_2\) to satisfy a given initial condition \({\mathbf x}_0 = {\mathbf x}(0) = (x_0, y_0)\text{.}\) By Lemma 3.2.7, we know that \({\mathbf v}_1\) and \({\mathbf v}_2\) form a basis for \({\mathbb R}^2\text{.}\) That is, we can write \({\mathbf x}_0\) as a linear combination of \({\mathbf v}_1\) and \({\mathbf v}_2\text{.}\) In other words, we can find \(c_1\) and \(c_2\) such that
\begin{equation*} {\mathbf x}_0 = {\mathbf x}(0) = c_1 {\mathbf v}_1 + c_2 {\mathbf v}_2. \end{equation*}
It remains to show that \({\mathbf x}(t) = c_1 e^{\lambda_1 t} {\mathbf v}_1 + c_2 e^{\lambda_2 t} {\mathbf v}_2\) is the unique solution to the system
\begin{align*} {\mathbf x}'(t) & = A {\mathbf x}(t),\\ {\mathbf x}(0) & = {\mathbf x}_0. \end{align*}
Suppose that there is another solution \({\mathbf y}(t)\) such that \({\mathbf y}(0) = {\mathbf x}_0\text{.}\) Then we can write
\begin{equation*} {\mathbf y}(t) = f(t) {\mathbf v}_1 + g(t) {\mathbf v}_2, \end{equation*}
where
\begin{align*} f(0) & = c_1,\\ g(0) & = c_2. \end{align*}
Since \({\mathbf y}(t)\) is a solution to our system of equations, we know that
\begin{equation*} A {\mathbf y}(t) = {\mathbf y}'(t) = f'(t) {\mathbf v}_1 + g'(t) {\mathbf v}_2. \end{equation*}
On the other hand,
\begin{equation*} A {\mathbf y}(t) = f(t) A {\mathbf v}_1 + g(t) A {\mathbf v}_2 = \lambda_1 f(t) {\mathbf v}_1 + \lambda_2 g(t) {\mathbf v}_2. \end{equation*}
Consequently, we have two first-order initial value problems,
\begin{align*} f'(t) & = \lambda_1 f(t),\\ f(0) & = c_1, \end{align*}
and
\begin{align*} g'(t) & = \lambda_2 g(t),\\ g(0) & = c_2. \end{align*}
The solutions of these initial value problems are
\begin{align*} f(t) & = c_1 e^{\lambda_1 t},\\ g(t) & = c_2 e^{\lambda_2 t}, \end{align*}
respectively. Thus, \({\mathbf y}(t) = {\mathbf x}(t)\text{,}\) and the proof of our theorem is complete.
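Theorem 3.2.6 translates directly into a computation: find the eigenpairs of \(A\text{,}\) then solve for \(c_1\) and \(c_2\text{.}\) Below is a minimal sketch of this recipe, assuming NumPy is available and that \(A\) has distinct real eigenvalues; the helper name general_solution is our own.

```python
# A sketch of the recipe in Theorem 3.2.6; assumes NumPy and distinct real
# eigenvalues. The helper name `general_solution` is our own.
import numpy as np

def general_solution(A, x0):
    """Return t -> x(t) for x' = Ax with x(0) = x0."""
    eigenvalues, V = np.linalg.eig(A)   # columns of V are eigenvectors
    c = np.linalg.solve(V, x0)          # write x0 = c1*v1 + c2*v2
    return lambda t: V @ (c * np.exp(eigenvalues * t))

# The mixing problem once more.
A = np.array([[-3.0, 4.0 / 3.0], [3.0, -3.0]])
x = general_solution(A, np.array([100.0, 0.0]))
print(x(0.0))  # [100.   0.]
print(x(1.0))  # agrees with 50e^{-1} + 50e^{-5} and 75e^{-1} - 75e^{-5}
```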
Subsection 3.2.4 Important Lessons
- If \({\mathbf v}_1\) and \({\mathbf v}_2\) are eigenvectors associated with two distinct real eigenvalues of a matrix \(A\text{,}\) then \({\mathbf v}_1\) and \({\mathbf v}_2\) are linearly independent.
- The Principle of Superposition tells us that any linear combination of solutions to the linear system \({\mathbf x}' = A {\mathbf x}\) is also a solution.
- Let \(A\) be a \(2 \times 2\) matrix. If \(A\) has a pair of distinct real eigenvalues, \(\lambda_1\) and \(\lambda_2\text{,}\) with associated eigenvectors \({\mathbf v}_1\) and \({\mathbf v}_2\text{,}\) then the general solution of the linear system \({\mathbf x}' = A {\mathbf x}\) is given by
\begin{equation*} {\mathbf x}(t) = \alpha e^{\lambda_1 t} {\mathbf v}_1 + \beta e^{\lambda_2 t} {\mathbf v}_2. \end{equation*}
Subsection Exercises
Solving Linear Systems with Distinct Real Eigenvalues
Find the general solution of each of the linear systems in Exercises 1–4.
1
\begin{align*} x' & = x + 2y\\ y' & = -x + 4y \end{align*}
2
\begin{align*} x' & = -x + 3y\\ y' & = -3x - y \end{align*}
3
\begin{align*} x' & = -2 x + y\\ y' & = -9x + 4y \end{align*}
4
\begin{align*} x' & = -2 x + y\\ y' & = -9x + 4y \end{align*}
Solving Initial Value Problems
Solve each of the linear systems in Exercises 5–8 for the given initial values.
5
\begin{align*} x' & = x + 2y\\ y' & = -x + 4y\\ x(0) & = 3\\ y(0) & = 2 \end{align*}
6
\begin{align*} x' & = -x + 3y\\ y' & = -3x - y\\ x(0) & = 3\\ y(0) & = 2 \end{align*}
7
\begin{align*} x' & = -2 x + y\\ y' & = -9x + 4y\\ x(0) & = 5\\ y(0) & = 3 \end{align*}
8
\begin{align*} x' & = 6x + 4y\\ y' & = -8x - 6y\\ x(0) & = 1\\ y(0) & = -2 \end{align*}
9
Consider the nonhomogeneous system of linear differential equations
\begin{align} x' & = a(t)x + b(t)y + f(t)\label{equation-exercise-linear02-nonhomogeneous-system}\tag{3.2.6}\\ y' & = c(t)x + d(t)y + g(t)\tag{3.2.7} \end{align}
and assume that the general solution of
\begin{align*} x' & = a(t)x + b(t)y\\ y' & = c(t)x + d(t)y \end{align*}
is given by
\begin{equation*} {\mathbf x}_h = \begin{pmatrix} x(t) \\ y(t) \end{pmatrix} = c_1 \begin{pmatrix} u_1(t) \\ u_2(t) \end{pmatrix} + c_2 \begin{pmatrix} v_1(t) \\ v_2(t) \end{pmatrix}. \end{equation*}
If
\begin{equation*} {\mathbf x}_p = \begin{pmatrix} \phi_1(t) \\ \phi_2(t) \end{pmatrix} \end{equation*}
is a particular solution of the system (3.2.6)–(3.2.7), show that
\begin{equation*} {\mathbf x}_h + {\mathbf x}_p = \begin{pmatrix} x(t) + \phi_1(t) \\ y(t) + \phi_2(t) \end{pmatrix} \end{equation*}
is the general solution to the system. Thus, to solve a nonhomogeneous system of linear differential equations, we need to find the solution of the corresponding homogeneous system and one particular solution of the nonhomogeneous system.
10
Consider the linear system
\begin{align*} x' & = x + 3y + (t - 3t^2)\\ y' & = x - y + (2 - t + t^2)\\ x(0) & = 1\\ y(0) & = -1. \end{align*}
- Find the general solution of the homogeneous system
\begin{align*} x' & = x + 3y\\ y' & = x - y \end{align*}
- Find a particular solution for
\begin{align*} x' & = x + 3y + (t - 3t^2)\\ y' & = x - y + (2 - t + t^2) \end{align*}
- Find the solution of
\begin{align*} x' & = x + 3y + (t - 3t^2)\\ y' & = x - y + (2 - t + t^2)\\ x(0) & = 1\\ y(0) & = -1. \end{align*}
Hint
Assume that your solution must be of the form
\begin{equation*} {\mathbf x}_p = \begin{pmatrix} a_2 t^2 + a_1 t + a_0 \\ b_2 t^2 + b_1 t + b_0 \end{pmatrix}. \end{equation*}
This is called the method of undetermined coefficients.
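If you would like to check your hand computation, SymPy (assumed available below) can carry out the undetermined-coefficients bookkeeping; be aware that running this sketch reveals the particular solution.

```python
# Carrying out the undetermined-coefficients bookkeeping with SymPy (assumed
# available). Running this reveals the particular solution, so treat it as a
# way to check your hand computation.
import sympy as sp

t = sp.symbols('t')
a0, a1, a2, b0, b1, b2 = sp.symbols('a0 a1 a2 b0 b1 b2')

x = a2 * t**2 + a1 * t + a0
y = b2 * t**2 + b1 * t + b0

# Substitute the guess into the system; each residual must vanish identically.
eq1 = sp.expand(sp.diff(x, t) - (x + 3 * y + t - 3 * t**2))
eq2 = sp.expand(sp.diff(y, t) - (x - y + 2 - t + t**2))

# Match coefficients of 1, t, and t^2 in both residuals and solve.
coefficients = [eq.coeff(t, k) for eq in (eq1, eq2) for k in range(3)]
print(sp.solve(coefficients, [a0, a1, a2, b0, b1, b2]))
```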
11
Consider the system
\begin{align*} x' & = ax + y\\ y' & = 2ax + 2y, \end{align*}
where \(a \in {\mathbb R}\text{.}\) For what values of \(a\) do you find a bifurcation (a change in the type of phase portrait)? Sketch typical phase portraits for values of \(a\) above and below the bifurcation point.
12
Prove that
\begin{equation*} \alpha e^{\lambda t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \beta e^{\lambda t} \begin{pmatrix} t \\ 1 \end{pmatrix} \end{equation*}
is the general solution of
\begin{equation*} {\mathbf x}' = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix} {\mathbf x}. \end{equation*}
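A symbolic check that the given expression satisfies the system is sketched below, assuming SymPy is available; showing that it is the general solution is the substance of the exercise.

```python
# Checking that the claimed expression satisfies x' = Ax; assumes SymPy.
# Proving that it is the *general* solution is the content of the exercise.
import sympy as sp

t, lam, alpha, beta = sp.symbols('t lambda alpha beta')

x = alpha * sp.exp(lam * t) * sp.Matrix([1, 0]) \
    + beta * sp.exp(lam * t) * sp.Matrix([t, 1])
A = sp.Matrix([[lam, 1], [0, lam]])

print(sp.simplify(sp.diff(x, t) - A * x))  # the zero vector
```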
Source: http://faculty.sfasu.edu/judsontw/ode/html-20180819/linear02.html