# Backward error of the LU decomposition and pivoting
The backward error of the LU decomposition is a fascinating topic. Despite the LU decomposition having been well known for decades, there are still many open questions regarding its backward error.
First of all, the LU decomposition itself is not backward stable, and we will see examples that demonstrate this. Indeed, the LU decomposition can catastrophically increase the backward error.
We will then learn about a small modification to the LU decomposition, called pivoting, which makes the algorithm stable in almost all situations. One can still construct matrices for which the improved algorithm is not backward stable. But these are quite pathological and do not tend to be relevant in practice.
## A first backward error result
The following theorem provides a backward error result for the basic LU decomposition. We will give the statement and then discuss its consequences. We will not provide the proof here.
### Backward error of the LU decomposition
Let \(A = LU\) be the LU decomposition of a nonsingular matrix \(A\in\mathbb{R}^{n\times n}\), and let \(\tilde{L}, \tilde{U}\) be the LU factors computed by Gaussian elimination in floating point arithmetic. The computed factors \(\tilde{L}\) and \(\tilde{U}\) satisfy

\[
\tilde{L}\tilde{U} = A + \Delta A
\]
for some \(\Delta A\in\mathbb{R}^{n\times n}\) with

\[
\frac{\|\Delta A\|}{\|\tilde{L}\|\cdot\|\tilde{U}\|} = \mathcal{O}(\epsilon_{mach}).
\]
We might naively think that this shows the LU decomposition is backward stable. However, for a backward stability result we would require that \(\|\Delta A\| / \|A\| = \mathcal{O}(\epsilon_{mach})\). But instead we have \(\|L\|\cdot \|U\|\) in the denominator.
So the question of backward stability of the LU decomposition reduces to the question of when we have

\[
\|L\|\cdot\|U\| \approx \|A\|.
\]
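The gap between the two normalisations is easy to observe numerically. The following sketch (plain NumPy; `lu_nopivot` is a helper written here for illustration, not a library routine) factorises a random matrix without pivoting and compares \(\|\tilde{L}\tilde{U} - A\|\) against both \(\|L\|\cdot\|U\|\) and \(\|A\|\):

```python
import numpy as np

def lu_nopivot(A):
    """Gaussian elimination without pivoting (illustration only)."""
    U = A.astype(float).copy()
    n = U.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        L[k + 1:, k] = U[k + 1:, k] / U[k, k]
        U[k + 1:, k:] -= np.outer(L[k + 1:, k], U[k, k:])
    return L, np.triu(U)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
L, U = lu_nopivot(A)
err = np.linalg.norm(L @ U - A)

# The theorem controls the first ratio ...
print(err / (np.linalg.norm(L) * np.linalg.norm(U)))  # ~ eps_mach
# ... while backward stability would require the second one to be small.
print(err / np.linalg.norm(A))
```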
### Failure of backward stability
Let

\[
A = \begin{pmatrix} \epsilon & 1 \\ 1 & 1 \end{pmatrix}
\]

for some small \(\epsilon > 0\).
We have

\[
L = \begin{pmatrix} 1 & 0 \\ \epsilon^{-1} & 1 \end{pmatrix},\qquad
U = \begin{pmatrix} \epsilon & 1 \\ 0 & 1 - \epsilon^{-1} \end{pmatrix}.
\]
In this example \(\|A\|\approx 1\) (depending on the norm) but \(\|L\|\cdot \|U\| = \mathcal{O}(\epsilon^{-2})\).
The norms of \(L\) and \(U\) can differ wildly from the norm of \(A\). But there is an easy fix in this example. We can swap the first and second row of \(A\) to obtain

\[
\begin{pmatrix} 1 & 1 \\ \epsilon & 1 \end{pmatrix}
= \begin{pmatrix} 1 & 0 \\ \epsilon & 1 \end{pmatrix}
\begin{pmatrix} 1 & 1 \\ 0 & 1 - \epsilon \end{pmatrix}.
\]
We now have \(\|L\|\cdot \|U\|\approx 1\), simply by swapping the two rows of the matrix.
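This effect is visible in floating point arithmetic. Below is a minimal NumPy sketch (the helper `lu_nopivot` is written here for illustration; \(\epsilon = 10^{-20}\) is chosen small enough that rounding actually occurs): eliminating without pivoting destroys the backward error, while swapping the rows first repairs it.

```python
import numpy as np

def lu_nopivot(A):
    """Gaussian elimination without pivoting (illustration only)."""
    U = A.astype(float).copy()
    n = U.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        L[k + 1:, k] = U[k + 1:, k] / U[k, k]
        U[k + 1:, k:] -= np.outer(L[k + 1:, k], U[k, k:])
    return L, np.triu(U)

eps = 1.0e-20
A = np.array([[eps, 1.0], [1.0, 1.0]])
L, U = lu_nopivot(A)
print(np.linalg.norm(L @ U - A))     # O(1): catastrophic backward error

A_swapped = A[[1, 0]]                # swap the two rows first
L2, U2 = lu_nopivot(A_swapped)
print(np.linalg.norm(L2 @ U2 - A_swapped))  # essentially zero
```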
This motivates the following modification of the LU decomposition, called column pivoted (or simply pivoted) LU; in the literature this strategy is commonly known as partial pivoting.
## LU decomposition with pivoting
The idea is simple. Consider again the \(3\times 3\) matrix

\[
A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}.
\]
Assume that \(a_{31}\) is the element with the largest magnitude in the first column. We then exchange the third and the first row to get

\[
P_1 A = \begin{pmatrix} a_{31} & a_{32} & a_{33} \\ a_{21} & a_{22} & a_{23} \\ a_{11} & a_{12} & a_{13} \end{pmatrix}
\]

with

\[
P_1 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}.
\]
We now proceed with the LU decomposition as normal to zero out the elements in the first column. We then obtain a matrix with the structure

\[
\begin{pmatrix} a_{31} & a_{32} & a_{33} \\ 0 & \ast & \ast \\ 0 & \ast & \ast \end{pmatrix},
\]

where \(\ast\) denotes an entry modified by the elimination step.
Before we proceed with the LU decomposition we swap the second and third row if the element at position \((3, 2)\) is larger by magnitude than the entry at position at \((2, 2)\).
This pivoting strategy guarantees that we have \(|\ell_{i, j}|\leq 1\) for all \(i, j\). However, we now have a decomposition not of \(A\) but of \(PA\) in the form

\[
PA = LU,
\]
where \(P\) is a permutation matrix that swaps the rows of \(A\) around. Note that we do not know \(P\) in advance. The correct permutation of the rows of \(A\) is determined as we progress through the LU decomposition.
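Most numerical libraries implement exactly this pivoted factorisation. A short check with SciPy (note that `scipy.linalg.lu` uses the convention \(A = PLU\), i.e. the permutation appears on the other side compared to \(PA = LU\)):

```python
import numpy as np
from scipy.linalg import lu

eps = 1.0e-8
A = np.array([[eps, 1.0],
              [1.0, 1.0]])

P, L, U = lu(A)  # SciPy convention: A = P @ L @ U
print(P)                          # the two rows were swapped
print(np.abs(L).max() <= 1.0)     # pivoting keeps |l_ij| <= 1
print(np.allclose(P @ L @ U, A))  # True
```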
## Growth factors
Let us consider how pivoting influences the size of the factors in the LU decomposition. As already stated, we have \(|\ell_{i, j}| \leq 1\) with pivoting. Hence, \(\|L\| \approx 1\) and

\[
\|L\|\cdot\|U\| \approx \|U\|
\]
(all up to small norm and dimension dependent factors)
To measure how \(\|U\|\) grows, the following definition of the growth factor \(\rho\) is useful:

\[
\rho = \frac{\max_{i,j}|u_{ij}|}{\max_{i,j}|a_{ij}|}.
\]
Hence, we have that

\[
\|U\| \approx \rho\,\|A\|.
\]
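The growth factor is easy to evaluate numerically. The sketch below measures \(\rho\) as the ratio of largest entries of \(U\) and \(A\) (one common convention) using SciPy's pivoted LU:

```python
import numpy as np
from scipy.linalg import lu

def growth_factor(A):
    """rho = max_ij |u_ij| / max_ij |a_ij| for the pivoted LU of A."""
    _, _, U = lu(A)  # SciPy computes A = P @ L @ U with partial pivoting
    return np.abs(U).max() / np.abs(A).max()

rng = np.random.default_rng(42)
print(growth_factor(rng.standard_normal((100, 100))))  # typically small
```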
We can now go back to our original backward error result and obtain that the computed LU factors satisfy

\[
\tilde{L}\tilde{U} = PA + \Delta A
\]

with

\[
\frac{\|\Delta A\|}{\|A\|} = \mathcal{O}(\rho\,\epsilon_{mach}).
\]
Note that the notation \(\mathcal{O}(\rho\epsilon_{mach})\) is a bit misleading, since the \(\mathcal{O}\)-notation absorbs constants. Nevertheless, it is used here to emphasise the importance of the factor \(\rho\).
It follows that the LU decomposition with pivoting is backward stable if \(\rho\) is small.
For most matrices this is the case, but not for all. We will get to know examples where \(\rho\) can grow exponentially, but also see that this almost never happens in practice.
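As a preview, one classic construction (often attributed to Wilkinson; shown here purely as an illustration) makes \(\rho\) grow like \(2^{n-1}\) even with pivoting:

```python
import numpy as np
from scipy.linalg import lu

n = 20
# Ones on the diagonal and in the last column, -1 strictly below the
# diagonal. Partial pivoting never swaps rows for this matrix, and the
# last column doubles at every elimination step.
A = np.eye(n) - np.tril(np.ones((n, n)), -1)
A[:, -1] = 1.0

_, _, U = lu(A)  # SciPy: A = P @ L @ U with partial pivoting
rho = np.abs(U).max() / np.abs(A).max()
print(rho)  # 2**(n-1) = 524288.0
```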