A greatest common divisor and a least common multiple of solutions of a linear matrix equation
arXiv: [math.GM], Oct

Volodymyr P. Shchedryk
email: [email protected]
Institute for Applied Problems of Mechanics and Mathematics, National Academy of Sciences of Ukraine
We describe the explicit form of a left greatest common divisor and a left least common multiple of the solutions of a solvable linear matrix equation over a commutative elementary divisor domain. We prove that this left greatest common divisor and this left least common multiple are themselves solutions of the same equation.
Introduction. The investigation of solutions of linear matrix equations was started in the second half of the XIX century by Sylvester. Many analytical and approximate methods and algorithms for finding solutions have been developed; the concepts of the Moore–Penrose inverse and the Kronecker product were actively used. The question of singling out solutions with certain prescribed properties was also considered. In particular, solutions with a symmetry condition were studied in [1-3], Hermitian positive definite solutions in [4, 5], solutions of minimal rank in [6], and diagonal and triangular solutions in [7].

In the present article, we investigate solutions of the linear matrix equation BX = A over a commutative elementary divisor domain [8]. Note that a commutative elementary divisor domain is a commutative domain over which each matrix is equivalent to a diagonal matrix in which each diagonal element (invariant factor) divides the next one. Examples of elementary divisor rings are Euclidean rings, principal ideal rings, adequate rings, and the ring of formal power series over the field of rational numbers with integer constant term (see [9, 10]).

According to [11, Theorem 2, p. 218], the equation BX = A has a solution if and only if the corresponding invariant factors of the matrices B and (A  B) coincide. In particular, the solutions of a solvable equation can be found by the method proposed in [11, Theorem 1, p. 215].

Let BX = A be a solvable equation. Then there exists a matrix C such that A = BC.
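The solvability criterion just quoted can be illustrated computationally over Z, which is a Euclidean and hence an elementary divisor domain. The sketch below uses toy matrices chosen for this note (they are not from the article) and computes invariant factors by the classical gcd-of-minors formula: if d_r is the g.c.d. of all r×r minors, the r-th invariant factor equals d_r / d_{r-1}.

```python
# Minimal sketch over Z: BX = A is solvable iff the invariant factors of B
# and of the augmented matrix (A B) coincide. Toy matrices, assumed here.
from math import gcd
from itertools import combinations

def det(M):
    # Laplace expansion along the first row (fine for tiny matrices).
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def invariant_factors(M):
    # Invariant factors via the gcd-of-minors formula.
    rows, cols = len(M), len(M[0])
    factors, d_prev = [], 1
    for r in range(1, min(rows, cols) + 1):
        d = 0
        for rs in combinations(range(rows), r):
            for cs in combinations(range(cols), r):
                d = gcd(d, det([[M[i][j] for j in cs] for i in rs]))
        if d == 0:  # all r x r minors vanish: no further invariant factors
            break
        factors.append(d // d_prev)
        d_prev = d
    return factors

B = [[2, 0], [0, 0]]
A_good = [[4, 0], [0, 0]]   # 2 | 4, so B is a left divisor of A_good
A_bad = [[1, 0], [0, 0]]    # 2 does not divide 1 over Z

aug = lambda A, B: [ra + rb for ra, rb in zip(A, B)]
print(invariant_factors(B) == invariant_factors(aug(A_good, B)))  # True: solvable
print(invariant_factors(B) == invariant_factors(aug(A_bad, B)))   # False: unsolvable
```

Here B = diag(2, 0) left-divides diag(4, 0) but not diag(1, 0), matching the criterion: the invariant factors of B and of the augmented matrix (A  B) coincide exactly in the solvable case.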
It means that the matrix B is a left divisor of the matrix A. The converse statement is also true. Consequently, the matrix equation BX = A is solvable if and only if the matrix B is a left divisor of the matrix A. Therefore, the solutions of this equation are left quotients of A by B. The structure of these quotients is described in [12, 13]. A quotient of a by b in R is denoted by a/b ∈ R. However, already the symbol A/B in matrix rings over R does not make sense, because this quotient is not always uniquely determined. Consequently, the question arises of choosing those solutions of the matrix equation BX = A which "generate" the others. Since a left greatest common divisor of matrices over a commutative elementary divisor domain (see the definition below) is defined uniquely up to a right associate (see [12, Theorem 1.11]), we can define A/B as a left greatest common divisor of the solutions of the matrix equation BX = A. A natural question then arises: is a left g.c.d. of the solutions of BX = A again a solution? In the present article we give an affirmative answer. At the same time, we show that a left least common multiple of the solutions of this equation has the same property.

Main result. If A = BC, then B is called a left divisor of A, and A is called a right multiple of B. If A = DA_1 and F = DF_1, then D is called a left common divisor of A and F. In addition, if D is a right multiple of each left common divisor of A and F, then D is called a left greatest common divisor (g.c.d.) of A and F.

If S = TM = KN, then the matrix S is called a left common multiple of M and N. Moreover, if the matrix S is a right divisor of each left common multiple of M and N, then S is called a left least common multiple (l.c.m.) of M and N.

Our main result is the following.

Theorem. Let R be a commutative elementary divisor domain. If BX = A is a solvable matrix equation over R, where A, X, B ∈ M_n(R), then a left g.c.d. and a left l.c.m.
of the solutions of this equation are again solutions of BX = A.

Proof. By the definition of an elementary divisor domain, the matrices
A and B can be written as A = P⁻¹EQ⁻¹ and B = V⁻¹ΦU⁻¹, in which
$$E = \mathrm{diag}(\varepsilon_1, \ldots, \varepsilon_k, 0, \ldots, 0), \qquad \varepsilon_i \mid \varepsilon_{i+1}, \; i = 1, \ldots, k-1,$$
$$\Phi = \mathrm{diag}(\varphi_1, \ldots, \varphi_t, 0, \ldots, 0), \qquad \varphi_j \mid \varphi_{j+1}, \; j = 1, \ldots, t-1,$$
and P, V, Q, U ∈ GL_n(R). The matrix B is a left divisor of A (see [12, Theorem 4.1]) if and only if V = LP, where L belongs to the set
$$\mathbf{L}(E, \Phi) = \{ L \in GL_n(R) \mid \exists\, S \in M_n(R) : LE = \Phi S \}.$$
Moreover, t ≥ k and φ_i | ε_i (i = 1, …, k) by [12, Theorem 4.6]. Furthermore,
$$\mathbf{L}(E, \Phi) = \left\{ L \in GL_n(R) \;\middle|\; L := \begin{pmatrix} L_1 & L_2 \\ L_3 & L_4 \\ 0 & L_5 \end{pmatrix} \right\} \qquad (1)$$
(see [12, Theorem 4.5]), in which L_2, L_4, L_5 are arbitrary matrices of appropriate sizes,
$$L_1 = \begin{pmatrix}
l_{11} & l_{12} & \cdots & l_{1,k-1} & l_{1k} \\
\frac{\varphi_2}{(\varphi_2, \varepsilon_1)} l_{21} & l_{22} & \cdots & l_{2,k-1} & l_{2k} \\
\vdots & \vdots & & \vdots & \vdots \\
\frac{\varphi_k}{(\varphi_k, \varepsilon_1)} l_{k1} & \frac{\varphi_k}{(\varphi_k, \varepsilon_2)} l_{k2} & \cdots & \frac{\varphi_k}{(\varphi_k, \varepsilon_{k-1})} l_{k,k-1} & l_{kk}
\end{pmatrix},$$
$$L_3 = \begin{pmatrix}
\frac{\varphi_{k+1}}{(\varphi_{k+1}, \varepsilon_1)} l_{k+1,1} & \cdots & \frac{\varphi_{k+1}}{(\varphi_{k+1}, \varepsilon_k)} l_{k+1,k} \\
\vdots & & \vdots \\
\frac{\varphi_t}{(\varphi_t, \varepsilon_1)} l_{t1} & \cdots & \frac{\varphi_t}{(\varphi_t, \varepsilon_k)} l_{tk}
\end{pmatrix},$$
where (a, b) denotes a greatest common divisor of a and b, and B = (LP)⁻¹ΦU⁻¹. By the definition of L(E, Φ) we have LE = ΦS, so E = L⁻¹ΦS. Now
$$A = P^{-1} E Q^{-1} = P^{-1} (L^{-1} \Phi S) Q^{-1} = (LP)^{-1} \Phi S Q^{-1} = \big( (LP)^{-1} \Phi U^{-1} \big) \big( U S Q^{-1} \big) = BC,$$
where C := USQ⁻¹. So the matrix C is a solution of the equation BX = A.

If C_1 is also a solution (i.e. BC_1 = A), then A = BC = BC_1, so B(C − C_1) = 0. Hence C − C_1 ∈ Ann_r(B) and C_1 ∈ {C + Ann_r(B)}, where Ann_r(B) is the right annihilator of B. Conversely, if C_2 ∈ {C + Ann_r(B)}, then C_2 = C + N_1 for some N_1 ∈ Ann_r(B), and BC_2 = B(C + N_1) = BC + BN_1 = A + 0 = A. Consequently, the set {C + Ann_r(B)} consists of all solutions of the equation BX = A.

Let us describe the form of the elements of the set {C + Ann_r(B)}. Using [12, Theorem 1.15] and the fact that B = V⁻¹ΦU⁻¹, we obtain
$$\mathrm{Ann}_r(B) = \left\{ U \begin{pmatrix} 0_{t \times n} \\ D \end{pmatrix} \;\middle|\; D \in M_{(n-t) \times n}(R) \right\}.$$
Recall that C := USQ⁻¹. Using the same idea as in the proof of [12, Theorem 4.5], we obtain that
$$S = \begin{pmatrix} M_1 & 0 \\ M_2 & 0 \\ M_3 & M_4 \end{pmatrix},$$
in which
$$M_1 = \begin{pmatrix}
\frac{\varepsilon_1}{\varphi_1} l_{11} & \frac{\varepsilon_2}{\varphi_1} l_{12} & \cdots & \frac{\varepsilon_{k-1}}{\varphi_1} l_{1,k-1} & \frac{\varepsilon_k}{\varphi_1} l_{1k} \\
\frac{\varepsilon_1}{(\varphi_2, \varepsilon_1)} l_{21} & \frac{\varepsilon_2}{\varphi_2} l_{22} & \cdots & \frac{\varepsilon_{k-1}}{\varphi_2} l_{2,k-1} & \frac{\varepsilon_k}{\varphi_2} l_{2k} \\
\vdots & \vdots & & \vdots & \vdots \\
\frac{\varepsilon_1}{(\varphi_k, \varepsilon_1)} l_{k1} & \cdots & \cdots & \frac{\varepsilon_{k-1}}{(\varphi_k, \varepsilon_{k-1})} l_{k,k-1} & \frac{\varepsilon_k}{\varphi_k} l_{kk}
\end{pmatrix},$$
$$M_2 = \begin{pmatrix}
\frac{\varepsilon_1}{(\varphi_{k+1}, \varepsilon_1)} l_{k+1,1} & \cdots & \frac{\varepsilon_k}{(\varphi_{k+1}, \varepsilon_k)} l_{k+1,k} \\
\vdots & & \vdots \\
\frac{\varepsilon_1}{(\varphi_t, \varepsilon_1)} l_{t1} & \cdots & \frac{\varepsilon_k}{(\varphi_t, \varepsilon_k)} l_{tk}
\end{pmatrix},$$
and (M_3  M_4) is an arbitrary matrix from M_{(n−t)×n}(R). Consequently,
$$C + \mathrm{Ann}_r(B) = U S Q^{-1} + U \begin{pmatrix} 0_{t \times n} \\ D \end{pmatrix} = U \left( S + \begin{pmatrix} 0_{t \times n} \\ DQ \end{pmatrix} \right) Q^{-1} = U \left( S + \begin{pmatrix} 0_{t \times n} \\ D_1 \end{pmatrix} \right) Q^{-1}. \qquad (2)$$
Since Q is an invertible matrix, it is easy to check that M_{(n−t)×n}(R) Q = M_{(n−t)×n}(R), so D_1 := DQ (see (2)) can be any matrix from M_{(n−t)×n}(R). Consequently,
$$C + \mathrm{Ann}_r(B) = U \left( \begin{pmatrix} M_1 & 0 \\ M_2 & 0 \\ M_3 & M_4 \end{pmatrix} + \begin{pmatrix} 0_{t \times n} \\ D_1 \end{pmatrix} \right) Q^{-1} = U \begin{pmatrix} M_1 & 0 \\ M_2 & 0 \\ T_1 & T_2 \end{pmatrix} Q^{-1},$$
where (T_1  T_2) is an arbitrary matrix from M_{(n−t)×n}(R). It follows that
$$F := U \begin{pmatrix} M_1 & 0 \\ M_2 & 0 \\ 0 & I_{n-t} \end{pmatrix} Q^{-1} \in \{ C + \mathrm{Ann}_r(B) \}, \qquad (3)$$
where I_{n−t} is the identity matrix of order n − t (the zero block in the last row has size (n−t)×t). Since {C + Ann_r(B)} consists of all solutions of BX = A, and
$$U \begin{pmatrix} M_1 & 0 \\ M_2 & 0 \\ T_1 & T_2 \end{pmatrix} Q^{-1} = \left( U \begin{pmatrix} M_1 & 0 \\ M_2 & 0 \\ 0 & I_{n-t} \end{pmatrix} Q^{-1} \right) \left( Q \begin{pmatrix} I_t & 0 \\ T_1 & T_2 \end{pmatrix} Q^{-1} \right) = FM,$$
the solution F of the equation BX = A is a left divisor of all elements of {C + Ann_r(B)}. Since moreover F ∈ {C + Ann_r(B)}, we obtain that F is a left g.c.d. of {C + Ann_r(B)}.
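The g.c.d. solution F of (3), together with the l.c.m. N appearing in (4), can be checked on a toy instance over Z; the matrices below are illustrative choices for this note, not the article's example. Take B = diag(2, 0) and A = diag(4, 0); then one may take U = Q = I, t = k = 1, M_1 = (2), the solutions of BX = A are exactly the matrices [[2, 0], [c, d]], and (3), (4) reduce to F = diag(2, 1) and N = diag(2, 0).

```python
# Toy check over Z of the g.c.d./l.c.m. constructions (assumed matrices;
# U = Q = I, t = k = 1, so F = diag(2, 1) and N = diag(2, 0)).
def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

B, A = [[2, 0], [0, 0]], [[4, 0], [0, 0]]
F = [[2, 0], [0, 1]]   # candidate left g.c.d. of the solutions, formula (3)
N = [[2, 0], [0, 0]]   # candidate left l.c.m. of the solutions, formula (4)

assert matmul(B, F) == A and matmul(B, N) == A   # both are themselves solutions

for c in range(-3, 4):
    for d in range(-3, 4):
        X = [[2, 0], [c, d]]
        assert matmul(B, X) == A                  # X is a solution
        assert matmul(F, [[1, 0], [c, d]]) == X   # F left-divides X
        assert matmul([[1, 0], [0, 0]], X) == N   # N is a left multiple of X
print("all checks passed")
```

Every solution is thus a right multiple of F and a right divisor of N, and both F and N solve BX = A, as the Theorem asserts.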
Consider the matrix
$$
N := U \begin{pmatrix} M_1 & 0 \\ M_2 & 0 \\ 0 & 0 \end{pmatrix} Q^{-1}
= \left( U \begin{pmatrix} I_t & 0 \\ 0 & 0 \end{pmatrix} U^{-1} \right)
\left( U \begin{pmatrix} M_1 & 0 \\ M_2 & 0 \\ T_1 & T_2 \end{pmatrix} Q^{-1} \right). \qquad (4)
$$
Obviously, N ∈ {C + Ann_r(B)}, and by (4) it is a left multiple of all elements of {C + Ann_r(B)}, so N is a left l.c.m. of {C + Ann_r(B)}. The proof is done. □

Corollary 1. Each left g.c.d. and each left l.c.m. of the solutions of a solvable equation BX = A has the form
$$
U \begin{pmatrix} M_1 & 0 \\ M_2 & 0 \\ 0 & I_{n-t} \end{pmatrix} Q^{-1},
\qquad
U \begin{pmatrix} M_1 & 0 \\ M_2 & 0 \\ 0 & 0 \end{pmatrix} Q^{-1},
$$
respectively. □

Corollary 2. The set
$$
\left\{ \left( U \begin{pmatrix} M_1 & 0 \\ M_2 & 0 \\ 0 & I_{n-t} \end{pmatrix} Q^{-1} \right)
\left( Q \begin{pmatrix} I_t & 0 \\ T_1 & T_2 \end{pmatrix} Q^{-1} \right)
\;\middle|\; \begin{pmatrix} T_1 & T_2 \end{pmatrix} \in M_{(n-t) \times n}(R) \right\}
$$
consists of all solutions of the equation BX = A. □

Example. Let A and B be matrices over Z, where A is diagonal. It is easy to check that the Smith forms of the matrices A and B are E := A and Φ, respectively; moreover, B = V⁻¹Φ for a suitable V ∈ GL(Z). Consider the equation BX = A over Z. Using (1) and the fact that each invariant factor of Φ divides the corresponding invariant factor of E, we get that the set L(E, Φ) is nonempty and its elements have the block form described in (1) (see [12, Theorem 4.5]). The matrix A has the form A = I E I. It follows that V = LI = L (in the notation of the Theorem) with L ∈ L(E, Φ). This yields that the equation BX = A has a solution (see [12, Theorem 4.1]). Furthermore, LE = ΦS for a suitable matrix S. Formulas (3) and (4) now give a left g.c.d. and a left l.c.m. of the solutions of BX = A. Moreover, these matrices are themselves solutions of the equation BX = A. ⋄

Problem. Describe the rings over which a left g.c.d. and a left l.c.m. of the solutions of a solvable matrix equation BX = A are again solutions.

1. Vetter W.J.
Vector structures and solutions of linear matrix equations // Linear Algebra Appl. – 1975. – № 2. – P. 181–188.
2. Magnus J.R., Neudecker H. The elimination matrix: Some lemmas and applications // SIAM J. on Algebraic and Discrete Methods. – 1980. – № 4. – P. 422–449.
3. Don F.J. On the symmetric solutions of a linear matrix equation // Linear Algebra Appl. – 1987. – P. 1–7.
4. Khatri C.G., Mitra S.K. Hermitian and nonnegative definite solutions of linear matrix equations // SIAM J. on Applied Mathematics. – 1976. – № 4. – P. 579–585.
5. Ran A., Reurings M. A fixed point theorem in partially ordered sets and some applications to matrix equations // Proc. Amer. Math. Soc. – 2004. – № 5. – P. 1435–1443.
6.
Recht B., Fazel M., Parrilo P.
Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization // doi.org/10.1137/070697835.
7.
Magnus J.R.
L-structured matrices and linear matrix equations // Linear Algebra Appl. – 1983. – № 1. – P. 67–88.
8. Kaplansky I. Elementary divisors and modules // Trans. Amer. Math. Soc. – 1949. – P. 464–491.
9. Bovdi V.A., Shchedryk V.P.
Commutative Bezout domains of stable range 1.5 // Linear Algebra Appl. – 2019. – P. 127–134.
10.
Zabavsky B.V.
Diagonal reduction of matrices over rings. – Lviv: VNTL Publishers, Math. Studies Monograph Series, 2012. – 251 p.
11.
Kazimirsky P.S.
Decomposition of matrix polynomials into factors. – Naukova Dumka, 1981. – 224 p.
12.
Shchedryk V.P.