# Solving A System of Linear Equations
Consider the system:

\[\begin{alignat*}{4} x &+ y &+ z &= 6 \\ -x &+ 2y & &= 3 \\ 2x & &+ z &= 5 \end{alignat*}\]
We will solve this in steps.
Use the first equation to eliminate \(x\) in the other two equations, resulting in:
\[\begin{alignat*}{4} x &+ y &+ z &= 6 \\ &+ 3y &+ z &= 9 \\ &- 2y &- z &= -7 \end{alignat*}\]

Use the second equation to eliminate \(y\) in the final equation, resulting in:
\[\begin{alignat*}{4} x &+ y &+ z &= 6 \\ &+ 3y &+ z &= 9 \\ & &- \tfrac{1}{3}z &= -1 \end{alignat*}\]

At this point, we have completed forward elimination — the last equation has a single unknown, \(z\), and once we solve for it, we can substitute the value of \(z\) into the equation above, leaving it with a single unknown, \(y\), and so forth.
Now we do back substitution.
From the last equation, we see \(z = 3\). Putting this into the equation above, we get \(y = 2\). Putting both of these into the first equation, we finally find \(x = 1\).
The process we just did is the basis of Gaussian elimination.
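This procedure maps almost directly to code. Here is a minimal sketch in Python with NumPy (assuming that environment; `gauss_solve` is just an illustrative name) of forward elimination followed by back substitution, with no safeguards yet:

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by Gaussian elimination (no pivoting) + back substitution."""
    A = A.astype(np.float64).copy()
    b = b.astype(np.float64).copy()
    n = len(b)

    # forward elimination: zero out everything below the diagonal,
    # one column at a time
    for k in range(n - 1):
        for i in range(k + 1, n):
            coeff = A[i, k] / A[k, k]
            A[i, k:] -= coeff * A[k, k:]
            b[i] -= coeff * b[k]

    # back substitution: the last row has a single unknown; work upward
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[1, 1, 1],
              [-1, 2, 0],
              [2, 0, 1]])
b = np.array([6, 3, 5])
print(gauss_solve(A, b))   # [1. 2. 3.]
```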
**Note:** The only source of error in this procedure is round-off error; there is no truncation error.
**Note:** You may have learned Cramer’s rule for solving a linear system in school, but for practical purposes it is much more computationally expensive than Gaussian elimination: evaluating the determinants by cofactor expansion takes \(\mathcal{O}(n!)\) operations, compared with \(\mathcal{O}(n^3)\) for elimination.
## Matrix form
We can redo this same procedure in matrix form, starting with writing the system as:

\[{\bf A}{\bf x} = {\bf b}\]

with

\[{\bf A} = \left(\begin{array}{ccc} 1 & 1 & 1 \\ -1 & 2 & 0 \\ 2 & 0 & 1 \end{array}\right), \qquad {\bf x} = \left(\begin{array}{c} x \\ y \\ z \end{array}\right)\]

and

\[{\bf b} = \left(\begin{array}{c} 6 \\ 3 \\ 5 \end{array}\right)\]
We will write an augmented matrix \(({\bf A}|{\bf b})\) as:

\[\left(\begin{array}{ccc|c} 1 & 1 & 1 & 6 \\ -1 & 2 & 0 & 3 \\ 2 & 0 & 1 & 5 \end{array}\right)\]
This allows us to easily do the same steps to both \({\bf A}\) and \({\bf b}\) together.
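For instance, with NumPy (a quick sketch) we can stack \({\bf b}\) onto \({\bf A}\) as an extra column:

```python
import numpy as np

A = np.array([[1., 1., 1.],
              [-1., 2., 0.],
              [2., 0., 1.]])
b = np.array([6., 3., 5.])

# augmented matrix (A|b): any row operation now acts on A and b together
Ab = np.hstack((A, b.reshape(-1, 1)))
print(Ab)
```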
Doing the same sequence of steps as above, the augmented matrix becomes:

\[\left(\begin{array}{ccc|c} 1 & 1 & 1 & 6 \\ 0 & 3 & 1 & 9 \\ 0 & 0 & -\tfrac{1}{3} & -1 \end{array}\right)\]

At this point, the matrix is in row echelon form, and it clearly shows the structure needed for back substitution.
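As a sanity check, we can compare against a library solver such as `np.linalg.solve`, which internally performs an LU factorization with the pivoting strategy discussed in the next section:

```python
import numpy as np

A = np.array([[1., 1., 1.],
              [-1., 2., 0.],
              [2., 0., 1.]])
b = np.array([6., 3., 5.])
print(np.linalg.solve(A, b))   # [1. 2. 3.]
```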
## Pivoting
The above case worked well because all of the matrix elements were roughly the same magnitude. Consider the same system, with the first term in the first equation multiplied by a small number \(\epsilon\):

\[\begin{alignat*}{4} \epsilon x &+ y &+ z &= 6 \\ -x &+ 2y & &= 3 \\ 2x & &+ z &= 5 \end{alignat*}\]
We can do the same procedure as before — use the first equation to eliminate \(x\) in the other two equations. We start by rewriting the first equation as:

\[x = \frac{1}{\epsilon}\left( 6 - y - z \right)\]
then adding or subtracting this from the other two, our system becomes:

\[\begin{alignat*}{4} \epsilon x &+ y &+ z &= 6 \\ &\left(2 + \tfrac{1}{\epsilon}\right) y &+ \tfrac{1}{\epsilon} z &= 3 + \tfrac{6}{\epsilon} \\ &- \tfrac{2}{\epsilon} y &+ \left(1 - \tfrac{2}{\epsilon}\right) z &= 5 - \tfrac{12}{\epsilon} \end{alignat*}\]
Now, as \(\epsilon \rightarrow 0\), roundoff error means that

\[2 + \frac{1}{\epsilon} \rightarrow \frac{1}{\epsilon}, \qquad 1 - \frac{2}{\epsilon} \rightarrow -\frac{2}{\epsilon}, \qquad 3 + \frac{6}{\epsilon} \rightarrow \frac{6}{\epsilon}, \qquad 5 - \frac{12}{\epsilon} \rightarrow -\frac{12}{\epsilon}\]

since in finite precision the \(\mathcal{O}(1)\) terms are negligible next to the \(1/\epsilon\) terms, so we can rewrite the system approximating the roundoff as:

\[\begin{alignat*}{4} \epsilon x &+ y &+ z &= 6 \\ &\tfrac{1}{\epsilon} y &+ \tfrac{1}{\epsilon} z &= \tfrac{6}{\epsilon} \\ &- \tfrac{2}{\epsilon} y &- \tfrac{2}{\epsilon} z &= -\tfrac{12}{\epsilon} \end{alignat*}\]
But now notice that the last equation is just \(-2\times\) the second equation — our system has become singular, and no longer has a solution!
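We can watch this happen in floating point. Reusing the no-pivoting `gauss_solve` sketch from above (this assumes that definition is in scope), double precision discards the \(\mathcal{O}(1)\) terms exactly as described, and the elimination produces a zero pivot:

```python
import numpy as np

eps = 1.e-20
print(2.0 + 1.0/eps == 1.0/eps)   # True: the 2 is lost to roundoff

A = np.array([[eps, 1., 1.],
              [-1., 2., 0.],
              [2., 0., 1.]])
b = np.array([6., 3., 5.])

# after eliminating x, the second and third rows are proportional, so the
# final elimination step zeros out the last pivot and back substitution
# computes 0/0
print(gauss_solve(A, b))   # [nan nan nan], with an invalid-value warning
```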
The problem arose because we started out the Gaussian elimination with an equation that had a very small coefficient on \(x\). But we could reorder the equations, swapping the first and second, and then we would no longer have this problem. This is called partial pivoting.
In terms of matrix form, partial pivoting means swapping rows so that the pivot row — the one we use to eliminate the column below it — has the entry of largest absolute magnitude in that column.
Let’s redo this using pivoting. To make the notation more compact, we’ll use the augmented matrix form:

\[\left(\begin{array}{ccc|c} \epsilon & 1 & 1 & 6 \\ -1 & 2 & 0 & 3 \\ 2 & 0 & 1 & 5 \end{array}\right)\]
Now we pivot, swapping the first and last rows, since the last row has the largest-magnitude entry in the first column:

\[\left(\begin{array}{ccc|c} 2 & 0 & 1 & 5 \\ -1 & 2 & 0 & 3 \\ \epsilon & 1 & 1 & 6 \end{array}\right)\]
Now we do forward elimination, eliminating all the non-zero entries in the first column beneath row 1:

\[\left(\begin{array}{ccc|c} 2 & 0 & 1 & 5 \\ 0 & 2 & \tfrac{1}{2} & \tfrac{11}{2} \\ 0 & 1 & 1 - \tfrac{\epsilon}{2} & 6 - \tfrac{5\epsilon}{2} \end{array}\right)\]
Next we can do the rounding, as \(\epsilon \rightarrow 0\):

\[\left(\begin{array}{ccc|c} 2 & 0 & 1 & 5 \\ 0 & 2 & \tfrac{1}{2} & \tfrac{11}{2} \\ 0 & 1 & 1 & 6 \end{array}\right)\]
Note now that this is not singular. We can continue with forward elimination, removing all non-zero entries in the second column beneath row 2:

\[\left(\begin{array}{ccc|c} 2 & 0 & 1 & 5 \\ 0 & 2 & \tfrac{1}{2} & \tfrac{11}{2} \\ 0 & 0 & \tfrac{3}{4} & \tfrac{13}{4} \end{array}\right)\]
Now we can do back substitution. From the last row, we can read off

\[z = \frac{13/4}{3/4} = \frac{13}{3}\]

and then putting this into the second row’s linear equation, we get

\[2y + \frac{1}{2}\cdot\frac{13}{3} = \frac{11}{2}\]

or

\[y = \frac{5}{3}\]

then moving up to the first row, we have:

\[2x + \frac{13}{3} = 5\]

so

\[x = \frac{1}{3}\]
We see that pivoting made this system solvable even in the presence of roundoff.
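To make this concrete, here is the earlier sketch extended with partial pivoting (`gauss_solve_pivot` is again just an illustrative name). On the \(\epsilon\)-perturbed system it reproduces the solution we just found by hand:

```python
import numpy as np

def gauss_solve_pivot(A, b):
    """Gaussian elimination with partial pivoting + back substitution."""
    A = A.astype(np.float64).copy()
    b = b.astype(np.float64).copy()
    n = len(b)

    for k in range(n - 1):
        # partial pivoting: bring the row with the largest |A[i, k]|
        # (for i >= k) up to be the pivot row
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]

        # forward elimination below the pivot
        for i in range(k + 1, n):
            coeff = A[i, k] / A[k, k]
            A[i, k:] -= coeff * A[k, k:]
            b[i] -= coeff * b[k]

    # back substitution
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

eps = 1.e-20
A = np.array([[eps, 1., 1.],
              [-1., 2., 0.],
              [2., 0., 1.]])
b = np.array([6., 3., 5.])
print(gauss_solve_pivot(A, b))   # [0.33333333 1.66666667 4.33333333]
```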
A further refinement on this is scaled pivoting, in which we first scale each row by its largest-magnitude element, and only then consider which row to pivot with.
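In code, only the pivot-selection rule changes. Here is a sketch of the scaled criterion (`scaled_pivot_row` is a hypothetical helper): compute each row’s largest-magnitude element once up front, then compare candidate pivots relative to their row’s scale:

```python
import numpy as np

def scaled_pivot_row(A, k, s):
    """Row index i >= k maximizing |A[i, k]| / s[i], where s[i] is the
    largest-magnitude element in row i (computed once, up front)."""
    return k + np.argmax(np.abs(A[k:, k]) / s[k:])

A = np.array([[2., 100000., 1.],
              [1., 1., 1.],
              [3., 2., 10.]])
s = np.abs(A).max(axis=1)   # scale factors: [100000., 1., 10.]

# plain partial pivoting would pick row 2 (|3| is the largest entry in
# column 0), but relative to its own row, row 1's leading 1 is largest
print(scaled_pivot_row(A, 0, s))   # 1
```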