Jim Lambers
MAT461/561
Spring Semester 2009-10
Lecture 15 Notes

These notes correspond to Section 6.6 in the text.
            Special Types of Matrices
            The solution of a system of linear equations Ax = b can be obtained using Gaussian elimination
            with pivoting in conjunction with back substitution for any nonsingular matrix A. However, there
            are several classes of matrices for which modifications of this process are more appropriate. We
            will now examine some of these classes.
            Diagonally Dominant Matrices
A matrix A is strictly diagonally dominant if each diagonal element a_ii is larger in magnitude than
the sum of the magnitudes of all other elements in the ith row. That is,

$$\sum_{j=1,\,j\neq i}^{n} |a_{ij}| < |a_{ii}|, \quad i = 1, 2, \ldots, n.$$

We also say that A is strictly row diagonally dominant.
               It can be shown that if A is strictly diagonally dominant, then
               ∙ A is nonsingular.
               ∙ Gaussian elimination can be applied to A without performing any row interchanges.
            This is useful for ensuring that there is a unique solution of the system of linear equations that
            must be solved to obtain the coefficients of a cubic spline, as such a system is strictly diagonally
            dominant.
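As a quick illustration, the definition above translates directly into a test on the rows of A. The
following is a minimal sketch in Python with NumPy (the function name and example matrix are
ours, not from the text):

    import numpy as np

    def is_strictly_diagonally_dominant(A):
        # Each |a_ii| must exceed the sum of the magnitudes of the
        # other entries in row i.
        A = np.asarray(A, dtype=float)
        diag = np.abs(np.diag(A))
        off_diag_sums = np.abs(A).sum(axis=1) - diag
        return bool(np.all(off_diag_sums < diag))

    # A tridiagonal matrix of the kind that arises from cubic splines:
    A = [[4.0, 1.0, 0.0],
         [1.0, 4.0, 1.0],
         [0.0, 1.0, 4.0]]
    print(is_strictly_diagonally_dominant(A))   # True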
            Symmetric Positive Definite Matrices
A real, n×n matrix A is positive definite if, for any nonzero vector x,

$$x^T A x > 0.$$

Many applications involve symmetric positive definite matrices, which are positive definite and also
symmetric; that is, A^T = A. A positive definite matrix is the generalization to n×n matrices of a
positive number.
               If A is symmetric positive definite, then it has the following properties:
   ∙ A is nonsingular.
   ∙ All of the diagonal elements of A are positive.
   ∙ The largest element of the matrix lies on the diagonal.
   ∙ All of the eigenvalues of A are positive.
   ∙ det(A) > 0.
It is worth noting that the first two properties hold even if A is not symmetric.
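These properties can be observed numerically. A minimal sketch in Python with NumPy (the
example matrix and test vector are ours, chosen for illustration):

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [1.0, 3.0]])       # symmetric positive definite

    print(np.linalg.eigvalsh(A))     # eigenvalues, both positive
    print(np.linalg.det(A))          # 11.0 > 0
    x = np.array([1.0, -2.0])        # a nonzero test vector
    print(x @ A @ x)                 # x^T A x = 12.0 > 0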
In general it is not easy to determine whether a given n×n symmetric matrix A is also positive
definite. One approach is to check the matrices

$$A_k = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1k} \\ a_{21} & a_{22} & \cdots & a_{2k} \\ \vdots & \vdots & & \vdots \\ a_{k1} & a_{k2} & \cdots & a_{kk} \end{bmatrix}, \quad k = 1, 2, \ldots, n.$$

These matrices are called the leading principal submatrices of A, or the principal minors. It can be
shown that A is positive definite if and only if det(A_k) > 0 for k = 1, 2, ..., n.
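This criterion can be applied directly, though, as noted below, computing determinants is expensive
for large n. A minimal sketch in Python with NumPy (the function name is ours):

    import numpy as np

    def is_positive_definite_minors(A):
        # Check det(A_k) > 0 for every leading principal submatrix A_k.
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        return all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, n + 1))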
One desirable property of symmetric positive definite matrices is that Gaussian elimination
can be performed on them without pivoting, and all pivot elements are positive. Furthermore,
Gaussian elimination applied to such matrices is robust with respect to the accumulation of roundoff
error. However, Gaussian elimination is not the most practical approach to solving systems of
linear equations involving symmetric positive definite matrices, because it is not the most efficient
approach in terms of the number of floating-point operations that are required.
Instead, it is preferable to compute the Cholesky factorization of A,

$$A = LL^T,$$

where L is a lower triangular matrix with positive diagonal entries. Because A is factored into two
matrices that are the transpose of one another, the process of computing the Cholesky factorization
requires about half as many operations as the LU decomposition.
The algorithm for computing the Cholesky factorization can be derived by matching entries of
the product LL^T with those of A. This yields the following relation between the entries of L and
A:

$$a_{ik} = \sum_{j=1}^{k} \ell_{ij} \ell_{kj}, \quad i, k = 1, 2, \ldots, n, \quad i \geq k.$$

From this relation, we obtain the following algorithm.
    for j = 1, 2, ..., n do
        ℓ_jj = √(a_jj)
        for i = j + 1, j + 2, ..., n do
            ℓ_ij = a_ij / ℓ_jj
            for k = j, j + 1, ..., i do
                a_ik = a_ik − ℓ_ij ℓ_kj
            end
        end
    end
If A is not symmetric positive definite, then the algorithm will break down, because it will attempt
to compute ℓ_jj, for some j, by taking the square root of a negative number, or to divide by a zero
ℓ_jj. In fact, because it is far less expensive than computing determinants, the Cholesky factorization
is also an efficient method for checking whether a symmetric matrix is positive definite.
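The algorithm above can be transcribed directly. The following is a sketch in Python with NumPy
(0-based indices; the ValueError on breakdown is our choice of error handling):

    import math
    import numpy as np

    def cholesky(A):
        # Compute lower triangular L with A = L L^T, overwriting a copy of A.
        A = np.array(A, dtype=float)
        n = A.shape[0]
        L = np.zeros((n, n))
        for j in range(n):
            if A[j, j] <= 0.0:
                # Breakdown: A is not symmetric positive definite.
                raise ValueError("matrix is not positive definite")
            L[j, j] = math.sqrt(A[j, j])
            for i in range(j + 1, n):
                L[i, j] = A[i, j] / L[j, j]
                for k in range(j, i + 1):
                    A[i, k] -= L[i, j] * L[k, j]
        return L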
Once the Cholesky factor L of A is computed, a system Ax = b can be solved by first solving
Ly = b by forward substitution, and then solving L^T x = y by back substitution.
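For example, given the factor L, the two triangular solves can be carried out with SciPy's
triangular solver (a sketch; the function name solve_spd is ours):

    from scipy.linalg import solve_triangular

    def solve_spd(L, b):
        y = solve_triangular(L, b, lower=True)        # forward substitution: L y = b
        return solve_triangular(L.T, y, lower=False)  # back substitution: L^T x = y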
An alternative factorization of a symmetric positive definite matrix is the LDL^T factorization

$$A = LDL^T,$$

where L is unit lower triangular, and D is a diagonal matrix whose diagonal entries are positive.
The computation of the factors L and D is very similar to the algorithm for the LU decomposition,
with the following exceptions:
   ∙ During elimination, only elements in the lower triangle of A, that is, entries a_ij where i ≥ j,
     need to be updated, due to the symmetry of A.
   ∙ Before eliminating elements in column j, the jth diagonal element, d_jj, is assigned the value
     of a_jj^(j); that is, d_jj is set equal to the (j,j) element of A^(j).
The LDL^T factorization is also known as the “square-root-free Cholesky factorization”, since it
computes factors that are similar in structure to the Cholesky factors, but without computing any
square roots. In fact, if A = GG^T is the Cholesky factorization of A, then G = LD^{1/2}.
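A sketch of the LDL^T computation in Python with NumPy, analogous to the Cholesky sketch
above (the function name is ours, and the loop structure follows the two exceptions just listed):

    import numpy as np

    def ldlt(A):
        # Compute unit lower triangular L and vector d with A = L diag(d) L^T,
        # updating only the lower triangle of a copy of A.
        A = np.array(A, dtype=float)
        n = A.shape[0]
        L = np.eye(n)
        d = np.zeros(n)
        for j in range(n):
            d[j] = A[j, j]               # d_jj is the (j,j) entry of A^(j)
            for i in range(j + 1, n):
                L[i, j] = A[i, j] / d[j]
                for k in range(j + 1, i + 1):
                    A[i, k] -= L[i, j] * d[j] * L[k, j]
        return L, d

The relation G = LD^{1/2} can then be checked with G = L * np.sqrt(d), which scales column j
of L by the square root of d_j.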
                Banded Matrices
An n×n matrix A is said to have upper bandwidth p if a_ij = 0 whenever j ≥ i + p. Similarly, A
has lower bandwidth q if a_ij = 0 whenever i ≥ j + q. A matrix that has upper bandwidth p and
lower bandwidth q is said to have bandwidth w = p + q + 1.
   Any n×n matrix A has a bandwidth w ≤ 2n − 1. If w < 2n − 1, then A is said to be banded.
However, cases in which the bandwidth is O(1), such as when A is a tridiagonal matrix for which
p = q = 1, are of particular interest, because for such matrices Gaussian elimination, forward
substitution and back substitution are much more efficient. This is because
                   ∙ If A has lower bandwidth q, and A = LU is the LU decomposition of A (without pivoting),
                      then L has lower bandwidth q, because at most q elements per column need to be eliminated.
                   ∙ If A has upper bandwidth p, and A = LU is the LU decomposition of A (without pivoting),
                      then U has upper bandwidth p, because at most p elements per row need to be updated by
                      row operations.
                It follows that if A has O(1) bandwidth, then Gaussian elimination, forward substitution and back
                substitution all require only O(n) operations each, provided no pivoting is required.
When a matrix A is banded with bandwidth w, it is wasteful to store it in the traditional
2-dimensional array. Instead, it is much more efficient to store the elements of A in w vectors of
length at most n. Then, the algorithms for Gaussian elimination, forward substitution and back
substitution can be modified appropriately to work with components of these vectors instead of
entries of a matrix. For example, to perform Gaussian elimination on a tridiagonal matrix, we can
proceed as in the following algorithm. We assume that the main diagonal of A is stored in the
vector a, the subdiagonal (entries a_{j+1,j}) is stored in the vector l, and the superdiagonal (entries
a_{j,j+1}) is stored in the vector u.
    for j = 1, 2, ..., n − 1 do
        l_j = l_j / a_j
        a_{j+1} = a_{j+1} − l_j u_j
    end
Then, we can use these updated vectors to solve the system Ax = b using forward and back
substitution as follows:
    y_1 = b_1
    for i = 2, 3, ..., n do
        y_i = b_i − l_{i−1} y_{i−1}
    end
    x_n = y_n / a_n
    for i = n − 1, n − 2, ..., 1 do
        x_i = (y_i − u_i x_{i+1}) / a_i
    end
Note that after Gaussian elimination, the components of the vector l are the subdiagonal entries of
L in the LU decomposition of A, and the components of the vector u are the superdiagonal entries
of U.
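The three algorithms above combine into a complete tridiagonal solver (often called the Thomas
algorithm). A sketch in Python with NumPy, using 0-based indices and the same vector storage
(no pivoting is performed, so A should be, for example, strictly diagonally dominant):

    import numpy as np

    def solve_tridiagonal(a, l, u, b):
        # a: main diagonal (length n); l: subdiagonal (length n-1);
        # u: superdiagonal (length n-1); b: right-hand side (length n).
        a = np.array(a, dtype=float)   # copies: a and l are overwritten
        l = np.array(l, dtype=float)   # with the entries of the LU factors
        u = np.asarray(u, dtype=float)
        b = np.asarray(b, dtype=float)
        n = a.size
        for j in range(n - 1):         # Gaussian elimination
            l[j] = l[j] / a[j]
            a[j + 1] = a[j + 1] - l[j] * u[j]
        y = np.zeros(n)                # forward substitution: L y = b
        y[0] = b[0]
        for i in range(1, n):
            y[i] = b[i] - l[i - 1] * y[i - 1]
        x = np.zeros(n)                # back substitution: U x = y
        x[n - 1] = y[n - 1] / a[n - 1]
        for i in range(n - 2, -1, -1):
            x[i] = (y[i] - u[i] * x[i + 1]) / a[i]
        return x

    print(solve_tridiagonal([4, 4, 4], [1, 1], [1, 1], [5, 6, 5]))  # [1. 1. 1.]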