LDU factorization

Gennadi Malaschonok*

National University of Kyiv-Mohyla Academy, 2 Skovorody vul., Kyiv 04070, Ukraine
[email protected]
Abstract.
LU factorization of matrices is one of the fundamental algorithms of linear algebra. The widespread use of supercomputers with distributed memory requires a review of traditional algorithms, which were designed for computers with shared memory. Block recursive matrix algorithms are a class of algorithms that provide coarse-grained parallelism. A block recursive LU factorization algorithm was obtained in 2010. This algorithm is called LEU factorization. Like the traditional LU algorithm, it is designed for matrices over number fields. However, it does not solve the problem of numerical instability. We propose a generalization of the LEU algorithm to the case of a commutative domain and its field of quotients. This LDU factorization algorithm decomposes a matrix over a commutative domain into a product of three matrices, in which the matrices L and U belong to the commutative domain, and the elements of the weighted truncated permutation matrix D are the inverses of products of pairs of minors. All elements are computed without rounding errors, so the problem of instability does not arise.
Introduction
The representation of a matrix A in the form of two factors A = LU, where L is a lower triangular matrix and U is an upper triangular matrix, is called the LU decomposition (or LU factorization). This decomposition is considered a basic algorithm for any linear algebra library [1]. Many algorithms are built on it, including the solution of linear systems, matrix inversion, rank computation, etc.

With the advent of supercomputers, it became possible to increase the size of the matrices in applied problems. At the same time, shortcomings of the known matrix algorithms appeared: it became clear that they cannot be applied to large matrices. Problems such as accumulation of rounding errors, loss of accuracy, poor concurrency, loss of sparsity, high computational complexity, and others began to appear (see, for example, [2]).

In 2010, the LEU factorization algorithm was obtained. This algorithm applies to matrices over number fields. It overcame many of these shortcomings for finite fields: it was the first block recursive factorization algorithm with the complexity of matrix multiplication, sparse and highly parallel [3]. However, the problem of loss of accuracy over classical fields remained, as before, impossible to overcome. A generalization of LEU factorization to commutative domains and their fields of quotients was required. Such an algorithm allows, for example, the use of exact integers in the computations instead of approximate rational numbers. Two approaches to such an LDU factorization algorithm were proposed in [5], [6]. The present work continues and completes these studies. We propose the complete dichotomous recursive LDU factorization algorithm for a commutative domain and give its proof.

* The work was partially supported by the Academy of Sciences of Ukraine, project (2020) "Creation of an open data e-platform for the collective use centers".
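To fix ideas before the formal development, the following sketch (an illustration added here, not the paper's recursive algorithm) builds such an exact factorization A = LDU over the integers for a full-rank matrix with nonzero leading minors: the entries of L and U are bordering minors of A, and D is diagonal with entries (det_{i−1} det_i)^{-1}, in the style of classical fraction-free (Bareiss/Dodgson) elimination. The sample matrix and all names are illustrative.

```python
from fractions import Fraction

def minor(A, rows, cols):
    """Determinant of the submatrix of A given by the index collections rows, cols."""
    sub = [[A[i][j] for j in cols] for i in rows]
    n = len(sub)
    if n == 0:
        return 1
    if n == 1:
        return sub[0][0]
    # Laplace expansion along the first row (fine for tiny matrices).
    return sum((-1) ** j * sub[0][j] *
               minor(sub, range(1, n), [c for c in range(n) if c != j])
               for j in range(n))

def matmul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y))) for j in range(len(Y[0]))]
            for i in range(len(X))]

A = [[2, 1, 3],
     [1, 3, 2],
     [4, 1, 5]]
n = 3
# Leading principal minors det_0 = 1, det_1, ..., det_n (all nonzero here).
d = [1] + [minor(A, range(k), range(k)) for k in range(1, n + 1)]
# L[i][j] (i >= j): minor of A on rows 0..j-1, i and columns 0..j.
L = [[minor(A, list(range(j)) + [i], range(j + 1)) if i >= j else 0
      for j in range(n)] for i in range(n)]
# U[i][j] (j >= i): minor of A on rows 0..i and columns 0..i-1, j.
U = [[minor(A, range(i + 1), list(range(i)) + [j]) if j >= i else 0
      for j in range(n)] for i in range(n)]
# D = diag(1/(det_0*det_1), 1/(det_1*det_2), ..., 1/(det_{n-1}*det_n)).
D = [[Fraction(1, d[i] * d[i + 1]) if i == j else 0 for j in range(n)] for i in range(n)]
assert matmul(matmul(L, D), U) == A   # exact: no rounding errors occur
```

Here L and U are integer matrices whose entries are minors of A, and only D contains fractions, mirroring the exactness argument made in the abstract.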
Let R be a commutative domain and F its field of quotients. Let A ∈ R^{n×n} be a matrix of rank r, r ≤ n. We want to obtain matrices L, U ∈ R^{n×n} of rank n (L lower triangular, U upper triangular) and a matrix D of rank r, with r nonzero elements equal to (det_1)^{-1}, (det_1 det_2)^{-1}, ..., (det_{r−1} det_r)^{-1}, such that A = LDU.

We denote by det_r the r×r nonzero minor of the matrix A whose position is determined by the r nonzero rows and columns of the matrix D. The determinants of the successively nested nondegenerate submatrices of orders r−1, r−2, ..., 2, 1 are denoted det_{r−1}, det_{r−2}, ..., det_2, det_1, respectively.

To solve this problem we formulate a more general problem, but first we give some necessary definitions.

The diagram shows the structure of the semigroup S_wp of truncated weighted permutations. At the center of the diagram one can see the permutation group G_p. Between S_wp and G_p there are two more subalgebras: S_p and G_wp.

If we replace the elements equal to 1 in the matrices of the permutation group G_p by arbitrary nonzero elements, we obtain the group G_wp of weighted permutations.

If, on the contrary, we replace some of the elements equal to 1 in the matrices of the permutation group G_p by zero elements, we obtain the semigroup S_p of truncated permutations.

The semigroup S_wp of truncated weighted permutations is obtained from the group G_wp by replacing some nonzero elements with zeros.

If we select all diagonal matrices in the semigroup S_wp, we obtain the semigroup S_wd of (weighted) diagonal matrices.

Fig. 1. The structure of the truncated weighted permutations semigroup S_wp

Two other subalgebras, S_d and G_wd, are embedded in S_wd. The semigroup S_d is formed by the diagonal matrices whose diagonal elements can only be 1 and 0. The group G_wd is formed by those diagonal matrices from S_wp that have no zero elements on the diagonal. The identity group I, which contains the single identity matrix, closes this construction.

For matrices from the semigroup S_wp we introduce two mappings: unit and extended.

A homomorphism of the multiplicative groups F^* → 1 (R^* → 1) induces a homomorphism of the corresponding subalgebras: S_wp → S_p, S_wd → S_d, G_wp → G_p, G_wd → I. All nonzero elements of a matrix are replaced by unit elements. On the diagram these mappings correspond to the arrows directed down and to the left.

Definition 1.
The mapping of a matrix D ∈ S_wp(F), induced by the homomorphism F^* → 1,

  D ∈ S_wp → D^{→} ∈ S_p,

is called the unit mapping. The unit homomorphism of semigroups can be represented by the following commutative diagram (the vertical arrows denote multiplication):

  (A, B)  →  (A^{→}, B^{→})
     ↓              ↓
  C = AB  →  C^{→} = A^{→} B^{→}

Definition 2.
The matrix mapping D ∈ S_wp → D^{Ext} ∈ G_wp, in which the block at the intersection of the zero rows and zero columns is replaced by a unit block, is called the extended mapping and is indicated by the upper index "Ext". On the diagram the extended mapping corresponds to the four arrows directed down and to the right: S_wp → G_wp, S_wd → G_wd, S_p → G_p, S_d → I. As a result of such a transformation a matrix of full rank is obtained.
Definition 3.
The mapping D ∈ S_wp → D̄ = D^{Ext} − D ∈ S_p is called the complementary mapping and is denoted by a bar.

Property 1. The special case of the complementary mapping in which the matrix D belongs to the semigroup S_p is an involution on the semigroup S_p. The involution is reversible: the bar applied twice returns D.

Property 2. ∀ D ∈ S_p: D + D̄ ∈ G_p.

An example of such an involution: the matrix diag(1, 0) is mapped to diag(0, 1), and vice versa.

Hereinafter we consider matrices over the commutative domain R.

Definition 4.
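The three mappings can be sketched on matrices represented as lists of lists; this is a minimal illustration, and the pairing of zero rows with zero columns "in order" when inserting the unit block is an assumption of this sketch.

```python
from fractions import Fraction

def unit_map(D):
    """The unit mapping D -> D^(->): every nonzero entry becomes 1 (S_wp -> S_p)."""
    return [[1 if x else 0 for x in row] for row in D]

def ext_map(D):
    """The extended mapping D -> D^Ext: place an identity block at the
    intersection of the zero rows and zero columns (pairing them in order),
    so the result has full rank."""
    n = len(D)
    zero_rows = [i for i in range(n) if not any(D[i])]
    zero_cols = [j for j in range(n) if not any(D[i][j] for i in range(n))]
    E = [row[:] for row in D]
    for i, j in zip(zero_rows, zero_cols):
        E[i][j] = 1
    return E

def bar_map(D):
    """The complementary mapping D -> D-bar = D^Ext - D (always lies in S_p)."""
    E = ext_map(D)
    return [[E[i][j] - D[i][j] for j in range(len(D))] for i in range(len(D))]

# A rank-2 truncated weighted permutation matrix; the values are illustrative.
D = [[0, Fraction(1, 2), 0],
     [0, 0, 0],
     [Fraction(1, 10), 0, 0]]
P = unit_map(D)
assert bar_map(D) == [[0, 0, 0], [0, 0, 1], [0, 0, 0]]
# Property 2: for a truncated permutation P, P + P-bar is a full permutation matrix.
B = bar_map(P)
Q = [[P[i][j] + B[i][j] for j in range(3)] for i in range(3)]
assert all(sum(row) == 1 for row in Q) and all(sum(col) == 1 for col in zip(*Q))
```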
Let a matrix M be given and let A be its square submatrix located in the upper left corner. Any submatrix G obtained by adjoining to the block A some row r and some column c of the matrix M,

  G = ( A  c ; r  ω ),   (1)

is called a submatrix that surrounds the matrix A.

Theorem 1.
Let A be a square matrix, A* the adjoint matrix of A, det(A) ≠ 0, and G a surrounding matrix for A as in (1). Then

  det(G) = det(A) ω − r A* c.

Proof.
This equality expresses the expansion of the determinant of G along the row r and the column c.

Theorem 2.
Let a matrix M be divided into blocks

  M = ( A_k  B ; C  D ),

where A_k is a square block of size k×k, its determinant det_k is nonzero, and A_k* is the adjoint matrix of A_k. Then the elements of the matrix

  det_k D − C A_k* B

are the minors that surround the A_k block.

Proof.
To prove the theorem it suffices to apply Theorem 1 to each element of this matrix.
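Theorems 1 and 2 are easy to check numerically. The sketch below (all data illustrative) computes det_k·D − C·A_k*·B for a 4×4 integer matrix and compares each entry with the corresponding bordered (k+1)×(k+1) minor:

```python
def det(A):
    """Determinant by Laplace expansion along the first row (tiny matrices only)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(n))

def adjugate(A):
    """Adjoint matrix: adj(A)[i][j] = (-1)^(i+j) * det(A without row j, column i)."""
    n = len(A)
    return [[(-1) ** (i + j) * det([r[:i] + r[i + 1:] for t, r in enumerate(A) if t != j])
             for j in range(n)] for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y))) for j in range(len(Y[0]))]
            for i in range(len(X))]

M = [[2, 1, 3, 1],
     [1, 3, 2, 4],
     [4, 1, 5, 2],
     [3, 2, 1, 1]]
k = 2
Ak = [row[:k] for row in M[:k]]   # upper left k x k block; det_k = 5 here
B = [row[k:] for row in M[:k]]
C = [row[:k] for row in M[k:]]
Dd = [row[k:] for row in M[k:]]
CAB = matmul(matmul(C, adjugate(Ak)), B)
S = [[det(Ak) * Dd[i][j] - CAB[i][j] for j in range(2)] for i in range(2)]
# Each entry of S must equal the minor of M that borders Ak with row k+i and column k+j.
for i in range(2):
    for j in range(2):
        G = [[M[r][c] for c in (0, 1, k + j)] for r in (0, 1, k + i)]
        assert S[i][j] == det(G)
```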
Let R be a commutative domain and F its field of quotients. Let a matrix M ∈ R^{N×N} (N = 2^ν) be given, and let α be the determinant of the largest nondegenerate (or empty) submatrix located in the upper left corner of the matrix M. For the case of an empty submatrix we set α = 1.

Let n = 2^p, let A ∈ R^{n×n} be a matrix of rank r, r ≤ n, and let the elements of the matrix A be surrounding minors with respect to the minor α. In the case of an empty submatrix we can take A = M.

We want to obtain matrices L, U, M, W ∈ R^{n×n} of rank n (L lower triangular, U upper triangular) and a matrix D ∈ S_wp(F) of rank r, with r nonzero elements equal to (α det_1)^{-1}, (det_1 det_2)^{-1}, ..., (det_{r−1} det_r)^{-1}, such that

  α L D U = A,   L D̂ M = 𝟙,   W D̂ U = 𝟙.   (1)

We denote by det_r the r×r nonzero minor of the matrix A whose position is determined by the r nonzero rows and columns of the matrix D; det_{r−1}, det_{r−2}, ..., det_2, det_1 denote the determinants of the successively nested nondegenerate submatrices of orders r−1, r−2, ..., 2, 1, respectively.

We denote by D̂ the matrix

  D̂ = det_r^{-1} (α D)^{Ext} = α det_r^{-1} D + det_r^{-1} D̄,

and we denote

  E = D^{→},   I = E E^T,   J = E^T E,   Ī = 𝟙 − I,   J̄ = 𝟙 − J,   (2)

where E ∈ S_p, I, J ∈ S_d, and 𝟙 is the identity matrix. It should be noted that the matrices I and D have the same nonzero rows, and the matrices J and D have the same nonzero columns:

  I D J = D,   Ī D = 0,   D J̄ = 0.

We require the following properties of the matrices L and U:

  L Ī = Ī,   J̄ U = J̄.   (3)

We want to describe a procedure that computes the LDU factorization

  (L, D, U, M, D̂, W, α_r) = LDU(A, α)

in a recursive and dichotomous way.
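The objects of (2) and the identities I D J = D, Ī D = 0, D J̄ = 0 can be checked numerically; a small sketch with an illustrative D ∈ S_wp(F):

```python
from fractions import Fraction

def matmul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y))) for j in range(len(Y[0]))]
            for i in range(len(X))]

# An illustrative rank-2 truncated weighted permutation matrix D in S_wp(F).
D = [[0, Fraction(1, 2), 0],
     [0, 0, 0],
     [Fraction(1, 10), 0, 0]]
n = 3
E = [[1 if x else 0 for x in row] for row in D]   # E = D^(->), E in S_p
Et = [list(r) for r in zip(*E)]                   # transpose of E
I = matmul(E, Et)    # diagonal 0/1 matrix marking the nonzero rows of D
J = matmul(Et, E)    # diagonal 0/1 matrix marking the nonzero columns of D
Id = [[int(i == j) for j in range(n)] for i in range(n)]
Ibar = [[Id[i][j] - I[i][j] for j in range(n)] for i in range(n)]
Jbar = [[Id[i][j] - J[i][j] for j in range(n)] for i in range(n)]
Z = [[0] * n for _ in range(n)]
assert matmul(matmul(I, D), J) == D   # I D J = D
assert matmul(Ibar, D) == Z           # Ibar D = 0
assert matmul(D, Jbar) == Z           # D Jbar = 0
```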
(1) If A = 0, then we set D = I = J = 0, M = W = α 𝟙, D̂ = α^{-1} 𝟙, L = U = Ī = J̄ = D̄ = 𝟙, α_r = α.

(2) If n = 1 (A = [a], a ≠ 0), then we set D = [(α a)^{-1}], D̂ = [a^{-1}], L = U = M = W = [a], I = J = [1], Ī = J̄ = D̄ = [0], α_r = a.

(3) For A ≠ 0 and n > 1 we divide A into four equal blocks

  A = ( A11  A12 ; A21  A22 ).   (4)

We can do the LDU decomposition of the block A11:

  α L1 D11 U1 = A11,   L1 D̂1 M1 = 𝟙,   W1 D̂1 U1 = 𝟙.   (5)

Let k = rank(A11), and let det_1, det_2, ..., det_k be the nonzero nested leading minors of A11 that were found recursively, α_k = det_k, D̂1 = α_k^{-1}(α D11 + D̄1), and the nonzero elements of D11 equal (α det_1)^{-1}, (det_1 det_2)^{-1}, ..., (det_{k−1} α_k)^{-1}.

Then we can write the equality

  ( A11  A12 ; A21  A22 ) = ( L1  0 ; 0  𝟙 ) ( α D11  α_k^{-1} A'12 ; α_k^{-1} A'21  A22 ) ( U1  0 ; 0  𝟙 ),   (6)

where we denote the new blocks

  A'12 = α_k D̂1 M1 A12,   A'21 = α_k A21 W1 D̂1.   (7)

The middle matrix can be decomposed as follows:

  ( α D11  α_k^{-1} A'12 ; α_k^{-1} A'21  A22 ) =
  ( 𝟙  0 ; (αα_k)^{-1} A'21 D11^+  𝟙 ) ( α D11  α α_k^{-1} A''12 ; α α_k^{-1} A''21  α_k^{-1} A'22 ) ( 𝟙  (αα_k)^{-1} D11^+ A'12 ; 0  𝟙 ).   (8)

We denote E1 = D11^{→}, I1 = E1 E1^T, J1 = E1^T E1.

To denote the generalized inverse matrix we use the superscript plus. Note that for any matrix (a_ij) ∈ S_wp the generalized inverse matrix coincides with the pseudoinverse matrix:

  (a_ij)^+ = { (b_ij) : b_ij = 0 if a_ji = 0, b_ij = a_ji^{-1} if a_ji ≠ 0 }.

A pseudoinverse matrix is obtained by transposing the given matrix and replacing all its nonzero elements by the inverse elements.

Since Ī1 D̂1 = α_k^{-1} D̄1 = D̂1 J̄1, D11^+ Ī1 = 0 and D11^+ D11 = J1, we get

  A''12 = α^{-1} Ī1 A'12 = α^{-1} D̄1 M1 A12,   A''21 = α^{-1} A'21 J̄1 = α^{-1} A21 W1 D̄1.   (9)

For the lower right block we get

  A'22 = (αα_k)^{-1} ( αα_k A22 − A'21 D11^+ A'12 ).   (10)

The matrices A''12 and A''21 are matrices of surrounding minors with respect to the minor α_k; see the proof in Theorem 4.

Let

  α_k L2 D12 U2 = A''12,   L2 D̂2 M2 = 𝟙,   W2 D̂2 U2 = 𝟙

and

  α_k L3 D21 U3 = A''21,   L3 D̂3 M3 = 𝟙,   W3 D̂3 U3 = 𝟙

be the LDU decompositions of the blocks A''12 and A''21.

Let A' be the submatrix of A that is fixed by the nonzero rows of the matrix diag(I1, I2, I3) and the nonzero columns of the matrix diag(J1, J2, J3). The submatrices A''12, A''21 and the submatrix corresponding to the minor α_k have no common nonzero rows and columns, so the sequence of nested nonzero minors of the submatrix A' can be selected in different ways.

Let rank(A''12) = l − k and rank(A''21) = m − k; then the rank of the submatrix A' equals s = l + m − k. Suppose that we have obtained the sequences of nested minors det_1, ..., det_k, det_{k+1}, ..., det_l for the block (A11, A12) and det_1, ..., det_k, det'_{k+1}, ..., det'_m for the block (A11, A21)^T.

For the matrix A' we can set the following sequence of nested minors: det_1, ..., det_k, ..., det_l, det_{l+1}, ..., det_s, with det_{l+i} = λ det'_{k+i}, i = 1, 2, ..., m − k, where λ = det_l / det_k; in particular, det_s = λ det'_m.

We denote α_l = det_l, α_s = det_s,

  J_λ = λ J3 + J̄3,   I_λ = λ I2 + Ī2,   L~ = L2 I_λ,   U~ = J_λ U2,   D~ = λ^{-1} D12,   (11)

  D̂~ = λ^{-1} I_λ^{-1} D̂2,   M~ = λ M2,   W~ = λ W2.   (12)

Then the last system can be written as follows:

  (λ α_k) L~ D~ U~ = λ A''12,   L~ D̂~ M~ = 𝟙,   W~ D̂~ U~ = 𝟙.
So we can write the following matrix equation:

  ( α D11  α α_k^{-1} A''12 ; α α_k^{-1} A''21  α_k^{-1} A'22 ) = ( L~  0 ; 0  L3 ) ( α D11  α D~ ; α D21  α_k^{-1} A''22 ) ( U3  0 ; 0  U~ ),   (13)

where we denote

  A''22 = D̂3 M3 A'22 W~ D̂~ = D̂3 M3 A'22 W2 I_λ^{-1} D̂2   (14)

and use the equations I1 I2 = 0, J1 J3 = 0 and L2 D11 U3 = D11. To check the last equation we can write each matrix as follows: L2 = L2 I2 + Ī2, D11 = I1 D11 J1, U3 = J3 U3 + J̄3.

The middle matrix can be decomposed in two ways:

  ( α D11  α D~ ; α D21  α_k^{-1} A''22 ) = ( 𝟙  0 ; (αα_k)^{-1} A''22 D~^+  𝟙 ) ( α D11  α D~ ; α D21  α α_s^{-1} A''' ) ( 𝟙  (αα_k)^{-1} D21^+ A''22 J̄2 ; 0  𝟙 )

or

  ( α D11  α D~ ; α D21  α_k^{-1} A''22 ) = ( 𝟙  0 ; (αα_k)^{-1} Ī3 A''22 D~^+  𝟙 ) ( α D11  α D~ ; α D21  α α_s^{-1} A''' ) ( 𝟙  (αα_k)^{-1} D21^+ A''22 ; 0  𝟙 ).   (15)

Here we use the equations D~^+ D11 = 0 and D11 D21^+ = 0, and for the lower right block we get

  α_k^{-1} A''22 = α_k^{-1} A''22 J2 + α_k^{-1} I3 A''22 J̄2 + α α_s^{-1} A''',   or
  α_k^{-1} A''22 = α_k^{-1} I3 A''22 + α_k^{-1} Ī3 A''22 J2 + α α_s^{-1} A'''.

Both of these expressions give the same value of A''':

  A''' = α_s (αα_k)^{-1} Ī3 A''22 J̄2 = (α_k α)^{-1} D̄3 M3 A'22 W2 D̄2.   (16)

Further we will use the second decomposition.

The matrix A''' is the matrix of surrounding minors of A with respect to the minor α_s; see the proof in Theorem 4.

Let

  α_s L4 D22 U4 = A''',   L4 D̂4 M4 = 𝟙,   W4 D̂4 U4 = 𝟙

be the decomposition of the matrix A'''. Then we can write the equality

  ( D11  D~ ; D21  α_s^{-1} A''' ) = ( 𝟙  0 ; 0  L4 ) ( D11  D~ ; D21  D22 ) ( 𝟙  0 ; 0  U4 ).   (17)

To prove this equality we can write the matrices L4, U4, D21, D~ as follows: L4 = L4 I4 + Ī4, U4 = J4 U4 + J̄4, D21 = I3 D21 J3, D~ = I2 D~ J2, and check the equations L4 D21 = D21 and D~ U4 = D~.

As a result of the sequence (6), (8), (13), (15), (17) of decompositions we obtain the LDU decomposition of the matrix A in the form

  ( A11  A12 ; A21  A22 ) = α L D U,   D = ( D11  D~ ; D21  D22 ) = ( D11  λ^{-1} D12 ; D21  D22 ),   (18)

with D~ = λ^{-1} D12 and such matrices L and U:

  L = ( L1  0 ; 0  𝟙 ) ( 𝟙  0 ; (αα_k)^{-1} A'21 D11^+  𝟙 ) ( L~  0 ; 0  L3 ) ( 𝟙  0 ; (αα_k)^{-1} Ī3 A''22 D~^+  𝟙 ) ( 𝟙  0 ; 0  L4 ),

  U = ( 𝟙  0 ; 0  U4 ) ( 𝟙  (αα_k)^{-1} D21^+ A''22 ; 0  𝟙 ) ( U3  0 ; 0  U~ ) ( 𝟙  (αα_k)^{-1} D11^+ A'12 ; 0  𝟙 ) ( U1  0 ; 0  𝟙 ).

After multiplying the matrices on the right-hand side we get

  L = ( L11  0 ; L21  L22 ) = ( L1 L~  0 ; L21  L3 L4 ),   U = ( U11  U12 ; 0  U22 ) = ( U3 U1  U12 ; 0  U4 U~ ),   (19)

with

  U12 = (αα_k)^{-1} ( U3 D11^+ A'12 + D21^+ A''22 U~ ) = α_k^{-1} J1 M1 A12 + (α α_l)^{-1} J3 M3 A'22,   (20)

  L21 = (αα_k)^{-1} ( A'21 D11^+ L~ + L3 Ī3 A''22 D~^+ ) = α_k^{-1} A21 W1 I1 + (α α_k α_m)^{-1} D̄3 M3 A'22 W2 I2,   (21)

  D̂ = α_r^{-1} ( α D + D̄ ),   (22)

  M = D̂^{-1} L^{-1} = D̂^{-1} ( L11^{-1}  0 ; −L22^{-1} L21 L11^{-1}  L22^{-1} )
    = D̂^{-1} ( I_λ^{-1} D̂2 M2 D̂1 M1  0 ; −D̂4 M4 D̂3 M3 L21 I_λ^{-1} D̂2 M2 D̂1 M1  D̂4 M4 D̂3 M3 ),   (23)

  W = U^{-1} D̂^{-1} = ( U11^{-1}  −U11^{-1} U12 U22^{-1} ; 0  U22^{-1} ) D̂^{-1}
    = ( W1 D̂1 W3 D̂3  −W1 D̂1 W3 D̂3 U12 W2 D̂2 J_λ^{-1} W4 D̂4 ; 0  W2 D̂2 J_λ^{-1} W4 D̂4 ) D̂^{-1}.   (24)

In the expressions (18)–(24) the LDU decomposition of the matrix A is given, and the matrices W and M that satisfy the conditions L D̂ M = 𝟙 and W D̂ U = 𝟙 are obtained. We have thus proved the correctness of the following recursive algorithm.

  (L, D, U, M, D̂, W, α_r) = LDU(A, α).

(1) If A = 0, then { α_r = α; D = I = J = 0; L = U = Ī = J̄ = D̄ = 𝟙; M = W = α 𝟙; D̂ = α^{-1} 𝟙; }

(2) If n = 1 and A = [a], a ≠ 0, then { α_r = a; L = U = M = W = [a]; D = [(α a)^{-1}]; D̂ = [a^{-1}]; I = J = [1]; Ī = J̄ = D̄ = [0]; }

(3) If n > 1 and A ≠ 0, then { A = ( A11  A12 ; A21  A22 );

(3.1) (L1, D11, U1, M1, D̂1, W1, α_k) = LDU(A11, α);
  B12 = M1 A12;  A'12 = α_k D̂1 B12;  A''12 = D̄1 B12 / α;
  B21 = A21 W1;  A'21 = α_k B21 D̂1;  A''21 = B21 D̄1 / α;

(3.2) (L2, D12, U2, M2, D̂2, W2, α_l) = LDU(A''12, α_k);

(3.3) (L3, D21, U3, M3, D̂3, W3, α_m) = LDU(A''21, α_k);
  λ = α_l / α_k;  α_s = λ α_m;
  C = A'21 D11^+ A'12;  A'22 = (α α_k A22 − C) / (α α_k);
  A''' = D̄3 M3 A'22 W2 D̄2 / (α_k α);

(3.4) (L4, D22, U4, M4, D̂4, W4, α_r) = LDU(A''', α_s);
  J_λ = λ J3 + J̄3;  I_λ = λ I2 + Ī2;  L~ = L2 I_λ;  U~ = J_λ U2;
  U12 = J1 M1 A12 / α_k + J3 M3 A'22 / (α_l α);
  L21 = A21 W1 I1 / α_k + D̄3 M3 A'22 W2 I2 / (α_m α_k α);

  L = ( L1 L~  0 ; L21  L3 L4 ),   D = ( D11  λ^{-1} D12 ; D21  D22 ),   U = ( U3 U1  U12 ; 0  U4 U~ ),
  D̂ = α α_r^{-1} D + α_r^{-1} D̄,
  M = D̂^{-1} ( I_λ^{-1} D̂2 M2 D̂1 M1  0 ; −D̂4 M4 D̂3 M3 L21 I_λ^{-1} D̂2 M2 D̂1 M1  D̂4 M4 D̂3 M3 ),
  W = ( W1 D̂1 W3 D̂3  −W1 D̂1 W3 D̂3 U12 W2 D̂2 J_λ^{-1} W4 D̂4 ; 0  W2 D̂2 J_λ^{-1} W4 D̂4 ) D̂^{-1}. }
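The correctness arguments for this algorithm repeatedly use the Sylvester determinant identity. The following sketch (illustrative data) verifies it numerically: if B is the matrix of (k+1)-order minors of M bordering the leading k×k block, then det(B) = det(M)·det_k^{n−k−1}.

```python
def det(A):
    """Determinant by Laplace expansion along the first row (tiny matrices only)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(n))

M = [[2, 1, 3, 1],
     [1, 3, 2, 4],
     [4, 1, 5, 2],
     [3, 2, 1, 1]]
n, k = 4, 2
dk = det([row[:k] for row in M[:k]])   # determinant of the leading k x k block
# B[i][j] = minor of M on rows 0..k-1, k+i and columns 0..k-1, k+j.
B = [[det([[M[r][c] for c in list(range(k)) + [k + j]]
           for r in list(range(k)) + [k + i]])
      for j in range(n - k)] for i in range(n - k)]
# Sylvester's determinant identity:
assert det(B) == det(M) * dk ** (n - k - 1)
```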
The following statements prove the factorization algorithm.

Theorem 3.
Let M_k and M_s be corner blocks of sizes k×k and s×s (s > k, s = k + t) of the matrix M, and let their determinants det_k and det_s be nonzero. Let the matrix A be formed by the minors surrounding the block M_k and let it be divided into blocks

  A = ( A11  A12 ; A21  A22 ),

where A11 is a square block of size t×t and A11* is its adjoint matrix. Then the elements of the matrix

  det_k^{-1} ( det_s A22 − det_k^{1−t} A21 A11* A12 )   (25)

are the minors of the matrix M that surround the block M_s.

Proof.
The proof can be found in ([7], Theorem 2) or in ([8], pp. 23–25). In [8] this theorem is called the "Determinant Identity of Descent".

Note that Theorem 3 generalizes Theorem 2 if we assume that the bordered block can have size 0 and that the determinant of such an empty block is 1. Conversely, Theorem 2 is the special case of Theorem 3 obtained by considering each element of the original matrix as a minor surrounding an empty block.

Theorem 4.
Let the matrix A be formed by the minors surrounding the upper left corner block of the matrix M with determinant α. Let the matrix A be divided into blocks (4), let all the equalities of the system (5) hold, let rank(A11) = k, and let α_k be the largest nonzero nested minor of A11. Then the matrices A''12 and A''21 of (9) are matrices of surrounding minors with respect to the minor α_k, and A''' of (16) is the matrix of surrounding minors with respect to the minor α_s.

Proof.
To simplify the writing of the proof we consider the case when the nonzero block D_a of the matrix D11 is located in the upper left corner of D11, and we denote by A_a the nondegenerate block of size k×k in the upper left corner of the matrix A11.

We can write the LDU decomposition of the matrix A11 and the equalities W1 D̂1 U1 = 𝟙 and L1 D̂1 M1 = 𝟙 in the following block form:

  A11 = ( A_a  A_b ; A_c  A_d ) = α ( L_a  0 ; L_c  𝟙 ) ( D_a  0 ; 0  0 ) ( U_a  U_b ; 0  𝟙 ),

  ( W_a  W_b ; 0  α_k 𝟙 ) ( α α_k^{-1} D_a  0 ; 0  α_k^{-1} 𝟙 ) ( U_a  U_b ; 0  𝟙 ) = 𝟙,

  ( L_a  0 ; L_c  𝟙 ) ( α α_k^{-1} D_a  0 ; 0  α_k^{-1} 𝟙 ) ( M_a  0 ; −M_b  α_k 𝟙 ) = 𝟙.

The determinant of the submatrix of M that is defined by all the rows and columns of the minor α and of the block A_a equals α_k. Due to the Sylvester determinant identity we can write the equality det(A_a) = α_k α^{k−1}.

From the first equality and the Sylvester determinant identity we get

  α L_a D_a U_a = A_a,   α L_a D_a U_b = A_b,
  α_k α^{k−2} U_a^{-1} D_a^{-1} L_a^{-1} = A_a*,   α_k α^{k−2} U_a^{-1} D_a^{-1} = A_a* L_a,

where A_a* is the adjoint matrix of A_a. From the second equality we get

  α W_a D_a U_a = α_k 𝟙,   α W_a = α_k U_a^{-1} D_a^{-1},   W_b = −α W_a D_a U_b.

The consequence is the expressions for the blocks W_a and W_b:

  W_a = α^{1−k} A_a* L_a,   W_b = −α^{1−k} A_a* A_b,   W1 = ( W_a  W_b ; 0  α_k 𝟙 ).   (26)

From the third equality we obtain the expressions α L_a D_a M_a = α_k 𝟙 and M_b = α^{1−k} A_c A_a*, so

  M_a = α^{1−k} U_a A_a*,   M1 = ( M_a  0 ; −M_b  α_k 𝟙 ).   (27)

Let the matrix A21 be divided into two column blocks, A21 = ( A21a, A21b ); then the matrix A''21 = α^{-1} A21 W1 D̄1 can be written as follows:

  A''21 = α^{-1} ( A21a, A21b ) ( W_a  W_b ; 0  α_k 𝟙 ) ( 0  0 ; 0  𝟙 ) = ( 0,  α^{-1} ( α_k A21b − α^{1−k} A21a A_a* A_b ) ).

According to Theorem 3, A''21 is a matrix of surrounding minors with respect to the block A_a.

Let the matrix A12 be divided into two row blocks, A12 = ( A12a ; A12b ); then the matrix A''12 = α^{-1} D̄1 M1 A12 can be written as follows:

  A''12 = α^{-1} ( 0  0 ; 0  𝟙 ) ( M_a  0 ; −M_b  α_k 𝟙 ) ( A12a ; A12b ) = ( 0 ; α^{-1} ( α_k A12b − α^{1−k} A_c A_a* A12a ) ).

According to Theorem 3, A''12 is a matrix of surrounding minors with respect to the block A_a.

The matrices W1, D11 and M1 have the following block shapes:

  W1 D11 M1 = ( W_a  W_b ; 0  α_k 𝟙 ) ( D_a  0 ; 0  0 ) ( M_a  0 ; −M_b  α_k 𝟙 ) = ( W_a D_a M_a  0 ; 0  0 ),

  W_a D_a M_a = α^{2−2k} A_a* L_a D_a U_a A_a* = α_k α^{−k} A_a*.

By definition (10) and expression (7) we have

  A'22 = (αα_k)^{-1} ( αα_k A22 − A'21 D11^+ A'12 ) = A22 − α^{1−k} A21 ( A_a*  0 ; 0  0 ) A12.

According to Theorem 3, α^{-1} A'22 is a matrix of surrounding minors with respect to the minor α_k.

Denote by A'_I and A'_II the upper and lower block rows of the matrix α^{-1} A'22, so that α^{-1} A'22 = ( A'_I ; A'_II ). Similarly to the expressions (26) and (27) we obtain block forms of the products D̄3 M3 and W2 D̄2: besides zero blocks and the corner blocks α_m 𝟙 and α_l 𝟙, they contain the blocks −α_k^{1−m+k} C_c C_a* and −α_k^{1−l+k} B_a* B_b, respectively, where C_a, C_c and B_a, B_b denote the corresponding corner blocks of A''21 and A''12.

According to Theorem 3,

  A'_L = α_k^{-1} D̄3 M3 ( α^{-1} A'22 ) = ( 0 ; α_k^{-1} ( α_m A'_II − α_k^{1−m+k} C_c C_a* A'_I ) )

is a matrix of surrounding minors with respect to the minor α_m.

Finally, let us turn to the matrix (16):

  A''' = (α_k α)^{-1} D̄3 M3 A'22 W2 D̄2 = A'_L W2 D̄2 = ( 0,  α_l A'_{L2} − α_k^{1−l+k} A'_{L1} B_a* B_b ),

where A'_{L1} and A'_{L2} denote the left and right column blocks of the matrix A'_L. Using the definitions (11), (12) and the Sylvester determinant identity, we conclude by Theorem 3 that A''' is a matrix of surrounding minors with respect to the minor α_s.

Example
We give below an example of an LDU decomposition of an integer matrix in the form of the three identities A = LDU, L D̂ M = 𝟙, W D̂ U = 𝟙.

(The numerical matrices of the example are omitted.)

The products of matrices D̂ M and W D̂ can be reduced to a triangular form by inserting the product of a permutation matrix and the inverse permutation matrix between the factors.
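A small verified instance of the three identities, constructed for illustration (not the paper's original example): here α = 1, det_1 = 2, det_2 = det(A) = 5, the matrix has full rank, so D̄ = 0 and D̂ = det_2^{-1}·D; the matrices M = D̂^{-1}L^{-1} and W = U^{-1}D̂^{-1} turn out to be integer, as the construction promises.

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y))) for j in range(len(Y[0]))]
            for i in range(len(X))]

A = [[2, 1], [3, 4]]                     # det_1 = 2, det_2 = det(A) = 5, alpha = 1
L = [[2, 0], [3, 5]]                     # integer lower triangular
U = [[2, 1], [0, 5]]                     # integer upper triangular
D = [[F(1, 2), 0], [0, F(1, 10)]]        # diag((det_1)^-1, (det_1*det_2)^-1)
Dhat = [[F(1, 10), 0], [0, F(1, 50)]]    # det_2^{-1} * D
M = [[5, 0], [-15, 10]]                  # Dhat^{-1} L^{-1}, integral
W = [[5, -5], [0, 10]]                   # U^{-1} Dhat^{-1}, integral
I2 = [[1, 0], [0, 1]]
assert matmul(matmul(L, D), U) == A      # A = L D U
assert matmul(matmul(L, Dhat), M) == I2  # L Dhat M = 1
assert matmul(matmul(W, Dhat), U) == I2  # W Dhat U = 1
```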
00 0 0 . Conclusion
A dichotomous LDU factorization algorithm was proposed. It applies to matrices whose size is a power of 2. Such an algorithm is well parallelized and efficient on a supercomputer with distributed memory owing to its coarse-grained block structure. To find the decomposition of an arbitrary rectangular matrix, one must first place it inside a square matrix of suitable size, perform the decomposition, and then discard the extra zero parts of the resulting factors.

As with all previous recursive algorithms, its complexity is equal (up to a constant) to the complexity O(n^ω) of matrix multiplication. Like other LU algorithms, it gives a gain of r/n times when applied to matrices of small rank r, but it is also efficient for sparse matrices of full rank. An example of this type of matrix that is important in applications is considered in [4].

Since the decompositions of the upper right block and the lower left block are performed simultaneously, it is desirable that these two blocks have more nonzero elements than the other two blocks. For an almost-diagonal, tridiagonal or band matrix, one should first multiply it by a permutation matrix so that the main diagonal is located in these two blocks.

It should be noted that this algorithm is a generalization of the LEU algorithm [3] to the case of a commutative domain; therefore it can also be viewed as another proof of the LEU algorithm. We specifically emphasized the non-uniqueness of the decomposition, due to the fact that Eq. (15) can be applied in either of two versions. The complexity of the proof of the