Matrix Factorization
Gaussian elimination involves three types of elementary row operations:
- Swapping two rows (also known as a row exchange),
- Multiplying a row by a nonzero number,
- Adding a multiple of one row to another row.
Gaussian elimination reduces A to row echelon form in general, whereas #4 reduces A to the reduced row echelon form (rref), which additionally requires that 1) the leading entry (pivot) of each nonzero row is 1, and 2) each pivot is the only nonzero entry in its column.
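For concreteness, here is a minimal numpy sketch of these row operations carrying a made-up matrix (not from the text) to row echelon form:

```python
import numpy as np

# A made-up 3x3 example
A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])

# Type 3: add a multiple of one row to another, eliminating below each pivot
A[1] -= 2.0 * A[0]   # zero out A[1, 0]
A[2] -= 4.0 * A[0]   # zero out A[2, 0]
A[2] -= 3.0 * A[1]   # zero out A[2, 1]
# A is now in row echelon form:
# [[2., 1., 1.],
#  [0., 1., 1.],
#  [0., 0., 2.]]

# Type 2 (scale each row by 1/pivot) plus Type 3 applied above the pivots
# would take A the rest of the way to rref (here, the identity matrix).
```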
#1, #2 and #3 are three variants of LU decomposition.
#1 and #2 are the simple/basic forms of LU decomposition, which use only the second and third types of row operations in Gaussian elimination; no row exchanges are performed.
#3 is known as the LU decomposition with partial pivoting (or PLU decomposition), where row exchanges are necessary and represented by a permutation matrix P.
There is a fourth variant of LU decomposition, in which column permutations are also performed and represented by a permutation matrix Q. The decomposition becomes PAQ = LU, and is known as LU factorization with full pivoting.
It is typically required that A be a square matrix, but it is possible to generalize LU decomposition to rectangular matrices.
LU decomposition can be viewed as the matrix form of Gaussian elimination. It is often used to solve square systems of linear equations, and serves as a key step when inverting a matrix or computing the determinant of a matrix.
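As a sketch of how this looks in practice with scipy (the matrix A and right-hand side b below are made-up examples):

```python
import numpy as np
from scipy.linalg import lu, lu_factor, lu_solve

A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])

# PLU decomposition: scipy returns P, L, U with A = P @ L @ U
P, L, U = lu(A)
assert np.allclose(P @ L @ U, A)

# Solving Ax = b via the factorization (the usual application)
b = np.array([1., 2., 3.])
lu_piv = lu_factor(A)        # compact (LU, pivots) representation
x = lu_solve(lu_piv, b)
assert np.allclose(A @ x, b)

# The determinant comes almost for free: det(A) = det(P) * prod(diag(U))
assert np.isclose(np.linalg.det(A), np.linalg.det(P) * np.prod(np.diag(U)))
```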
#5 is the standard form of Cholesky decomposition, which is a special case of LU decomposition where the matrix is symmetric and positive-definite.
This is also known as LLT decomposition, since we can replace the notation C with L and denote A = LL^T. This expression is not recommended, as it can confuse the reader about the meaning of the notation L (the same letter used for the lower-triangular factor in LU).
In practice, we often perform A = LDL^T (as mentioned in #2, with U = L^T) to avoid extracting the square roots required in A = LL^T. This is often known as LDL decomposition, LDLT decomposition, or square-root-free Cholesky decomposition.
If the matrix is not positive-definite, LDLT decomposition can still be performed, but it requires pivoting (as discussed previously for LU decomposition) to ensure numerical stability. The decomposition becomes PAP^T = LDL^T, where P is a permutation matrix.
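A minimal sketch of both forms, assuming a made-up symmetric positive-definite matrix (numpy provides the LL^T form; scipy.linalg.ldl provides the square-root-free form):

```python
import numpy as np
from scipy.linalg import ldl

# A made-up symmetric positive-definite matrix
A = np.array([[4., 2., 2.],
              [2., 3., 1.],
              [2., 1., 3.]])

# Cholesky: A = L @ L.T with L lower triangular (requires square roots)
L = np.linalg.cholesky(A)
assert np.allclose(L @ L.T, A)

# Square-root-free variant: A = Lu @ D @ Lu.T (scipy applies symmetric
# pivoting internally, so ldl also handles indefinite matrices)
Lu, D, perm = ldl(A)
assert np.allclose(Lu @ D @ Lu.T, A)
```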
#6 is the standard form of QR decomposition. The only requirement is that A has linearly independent columns.
There are three common implementations of QR decomposition: the Gram–Schmidt process, Givens rotations, and Householder transformations (reflections).
If pivoting is necessary (for numerical stability), the decomposition becomes PAP' = QR, where P and P' are permutation matrices for rows and columns, respectively.
QR decomposition is often used to solve the linear least squares problem and is the basis for a particular eigenvalue algorithm, the QR algorithm.
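A sketch of QR-based least squares with numpy/scipy (the overdetermined system below is a made-up example):

```python
import numpy as np
from scipy.linalg import solve_triangular

# Made-up overdetermined system: minimize ||Ax - b||_2
A = np.array([[1., 1.],
              [1., 2.],
              [1., 3.]])
b = np.array([1., 2., 2.])

# Reduced QR: A = Q @ R, Q has orthonormal columns, R is upper triangular
Q, R = np.linalg.qr(A)        # uses Householder reflections internally
assert np.allclose(Q @ R, A)

# Least-squares solution: solve the triangular system R x = Q^T b
x = solve_triangular(R, Q.T @ b)
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
```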
#7 is the standard form of Eigendecomposition. The requirement is that A is diagonalizable or non-defective, or equivalently A is a square matrix with n linearly independent eigenvectors.
Diagonalizability of A depends on having enough linearly independent eigenvectors, whereas invertibility of A depends on having nonzero eigenvalues. There is no connection between the two.
Diagonalization can fail only if there are repeated eigenvalues; in that case, the matrix may have fewer than n linearly independent eigenvectors.
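A minimal numpy sketch (the matrix is a made-up diagonalizable example):

```python
import numpy as np

# A made-up diagonalizable matrix (eigenvalues 5 and 2)
A = np.array([[4., 1.],
              [2., 3.]])

w, V = np.linalg.eig(A)   # eigenvalues in w, eigenvectors in the columns of V

# Eigendecomposition: A = V @ diag(w) @ V^{-1}
assert np.allclose(V @ np.diag(w) @ np.linalg.inv(V), A)
```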
#8 is a special case of Eigendecomposition, where A is a real symmetric matrix.
When the matrix being factorized is a normal or real symmetric matrix, the decomposition is also known as spectral decomposition, derived from the spectral theorem.
Every real symmetric matrix is a normal matrix.
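For a real symmetric matrix, numpy's eigh returns the spectral decomposition directly; a small sketch with a made-up matrix:

```python
import numpy as np

# A made-up real symmetric matrix
A = np.array([[2., 1.],
              [1., 2.]])

# eigh returns real eigenvalues and an orthonormal set of eigenvectors,
# so Q is orthogonal: Q.T @ Q = I
w, Q = np.linalg.eigh(A)
assert np.allclose(Q.T @ Q, np.eye(2))

# Spectral decomposition: A = Q @ diag(w) @ Q.T
assert np.allclose(Q @ np.diag(w) @ Q.T, A)
```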
Useful facts (verified numerically in the sketch after this list):
The product of the eigenvalues is equal to the determinant of A.
The sum of the eigenvalues is equal to the trace of A.
The eigenvectors of A^-1 are the same as the eigenvectors of A, and the corresponding eigenvalues are the reciprocals 1/λ.
The statement "A can be eigendecomposed" does not imply that A has an inverse. A can be inverted if and only if all eigenvalues are nonzero.
The statement "A has an inverse" does not imply that A can be eigendecomposed. The counterexample is [1 1; 0 1], which is an invertible defective matrix.
#10 is the standard form of Singular Value Decomposition (SVD), which generalizes the eigendecomposition of a square matrix to any m-by-n matrix.
Intuitively, the SVD decomposition breaks down a linear transformation of R^n into a composition of three geometrical transformations: a rotation or reflection (V), followed by a coordinate-by-coordinate scaling (Sigma), followed by another rotation or reflection (U).
Examples of the relation between SVD and eigendecomposition (verified numerically after this list):
For a real symmetric square matrix, the singular values are the absolute values of the eigenvalues.
For matrix A = [1 0 1; 0 1 1; 0 0 0], the eigenvalues of A are 1, 1, 0, whereas the singular values of A are sqrt(3), 1, 0.
For matrix A = [1 1; 0 0], the eigenvalues are 1 and 0; the singular values are sqrt(2) and 0.
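Both examples are easy to verify with numpy; here is the first one:

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [0., 0., 0.]])

# A is triangular, so its eigenvalues sit on the diagonal: 1, 1, 0
U, s, Vt = np.linalg.svd(A)
assert np.allclose(s, [np.sqrt(3.0), 1.0, 0.0])   # the singular values differ

# Reconstruction: A = U @ diag(s) @ Vt
assert np.allclose(U @ np.diag(s) @ Vt, A)
```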
#12 is the standard form of Polar decomposition.
In general, the polar decomposition of a square matrix A always exists. If A is invertible, the decomposition is unique, and the factor H will be positive-definite.
Intuitively, if a real n-by-n matrix A is interpreted as a linear transformation of n-dimensional space R^n, the polar decomposition separates it into a rotation or reflection U of R^n, and a scaling of the space along a set of n orthogonal axes.
This decomposition is useful in computing the fundamental group of (matrix) Lie groups.
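scipy.linalg.polar computes this directly (the right polar decomposition A = UH); a sketch with a made-up invertible matrix:

```python
import numpy as np
from scipy.linalg import polar

# A made-up invertible matrix (an assumption, not from the text)
A = np.array([[1., 2.],
              [0., 1.]])

# Right polar decomposition: A = U @ H, with U orthogonal (a rotation or
# reflection) and H symmetric positive-definite (the scaling part)
U, H = polar(A)
assert np.allclose(U @ H, A)
assert np.allclose(U.T @ U, np.eye(2))   # U is orthogonal
assert np.allclose(H, H.T)               # H is symmetric
```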
| Decomposition | Requirements | Speed (small-to-medium) | Speed (large) | Accuracy |
| --- | --- | --- | --- | --- |
| Cholesky (LLT) | Positive definite | +++ | +++ | + |
| LDLT | Positive or negative semidefinite | +++ | + | ++ |
| LU with partial pivoting | Invertible | ++ | ++ | + |
| LU with full pivoting | None | - | - - | +++ |
| Householder QR | None | ++ | ++ | + |
| Householder QR with column pivoting | None | + | - | +++ |
| Householder QR with full pivoting | None | - | - - | +++ |
| Complete orthogonal decomposition | None | + | - | +++ |
| SVD (divide and conquer) | None | - | - | +++ |
| SVD (Jacobi) | None | - | - - - | +++ |

(This table follows Eigen's catalogue of dense matrix decompositions.)
Reference: Appendix C of Gilbert Strang, "Linear Algebra and Its Applications".