2×2 block representations of the Moore-Penrose inverse and orthogonal projection matrices
Bernd Fritzsche, Conrad Mädler
February 4, 2021
In this paper, new block representations of the Moore–Penrose inverse and of orthogonal projection matrices for arbitrary complex 2 × 2 block matrices are given.

Keywords
Moore–Penrose inverse, generalized inverses of matrices, block representations, orthogonal projection matrices
Mathematics Subject Classification (2010)
The aim of this paper is the following: Given an arbitrary complex (p + q) × (s + t) block matrix

E = [ a b ; c d ]  (1.1)

with p × s block a, we give new block representations E† = [ α β ; γ δ ] with s × p block α of the Moore–Penrose inverse E† of E as well as new block representations P_{R(E)} = [ e11 e12 ; e21 e22 ] with p × p block e11 of the orthogonal projection matrix P_{R(E)} onto the column space R(E) of E. The block entries should be given by expressions involving the blocks a, b, c, and d of E, where generalized inverses are built only of matrices of the block sizes, i.e., with numbers of rows and columns from the set {p, q, s, t}. We will see that this goal can be realized (without additional assumptions) by computation of {1}-inverses of four (non-negative Hermitian) matrices of the block sizes.

To the best of our knowledge, only Hung/Markham [12], Miao [13], Groß [9], and Yan [27] describe explicit block representations of the Moore–Penrose inverse of arbitrary complex 2 × 2 block matrices E. In [9], Groß gives a block representation of the Moore–Penrose inverse of a non-negative Hermitian 2 × 2 block matrix. The well-known representations E† = E*(EE*)† and E† = (E*E)†E* can then easily be used to derive a block representation for the Moore–Penrose inverse E† of an arbitrary block matrix E. In Yan [27], a full rank factorization of E in terms of a, b, c, d is utilized to obtain a block representation of E†.

Assuming certain additional conditions, e.g., on column spaces or ranks, several authors derived block representations of the Moore–Penrose inverse of matrices, see, e.g., [4, 10, 14]. The existence of a Banachiewicz–Schur form for E† is studied, e.g., in [2, 21]. Furthermore, special classes of matrices were considered in this context, see, e.g., [18, 19] for so-called block k-circulant matrices. Block representations of E† involving regular transformations, e.g., permutations, have also been considered, see, e.g., [11, 15].
A representation of the Moore–Penrose inverse of a block column or a block row in terms of the block entries is given in [1]. Under a certain rank additivity condition, a block representation of m × n partitioned matrices can be found in [20] as well. Several results on block representations of partitioned operators are obtained, e.g., in [6, 7, 24–26]. The list of references on this topic given here is not exhaustive.

Throughout this paper, let m, n, p, q, s, t be positive integers. We denote by C^{m×n} the set of all complex m × n matrices and by C^n := C^{n×1} the set of all column vectors with n complex entries. We write 0_{m×n} for the zero matrix in C^{m×n} and I_n for the identity matrix in C^{n×n}. Let 𝒰 and 𝒱 be linear subspaces of C^n. If 𝒰 ∩ 𝒱 = {0_{n×1}}, then we write 𝒰 ⊕ 𝒱 for the direct sum of 𝒰 and 𝒱. Let 𝒰⊥ be the orthogonal complement of 𝒰. If 𝒰 ⊆ 𝒱, we use the notation 𝒱 ⊖ 𝒰 := 𝒱 ∩ (𝒰⊥). We write R(M) and N(M) for the column space and the null space of a complex matrix M. Let M* be the conjugate transpose of a complex matrix M.

If M is an arbitrary complex m × n matrix, then there exists a unique complex n × m matrix X such that the four equations

(1) MXM = M,  (2) XMX = X,  (3) (MX)* = MX,  (4) (XM)* = XM  (2.1)

are fulfilled (see [16]). This matrix X is called the Moore–Penrose inverse of M and is usually designated by the notation M†. Following [3, Ch. 1, Sec. 1, Def. 1], for each M ∈ C^{m×n} we denote by M{j, k, ..., ℓ} the set of all X ∈ C^{n×m} which satisfy the equations (j), (k), ..., (ℓ) from the equations (1)–(4) in (2.1). Each matrix belonging to M{j, k, ..., ℓ} is said to be a {j, k, ..., ℓ}-inverse of M.

Remark 2.1. Let 𝒰 be a linear subspace of C^n. Then there exists a unique complex n × n matrix P_𝒰 such that P_𝒰 x ∈ 𝒰 and x − P_𝒰 x ∈ 𝒰⊥ for all x ∈ C^n. This matrix P_𝒰 is called the orthogonal projection matrix onto 𝒰.
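As a quick numerical sanity check (not part of the paper), NumPy's `numpy.linalg.pinv` computes the Moore–Penrose inverse, so it must satisfy the four equations (1)–(4) in (2.1); moreover, MM† is then the orthogonal projection matrix onto R(M):

```python
# Illustrative sketch: verify the Penrose equations (2.1) for numpy's pinv
# and that M M† is the orthogonal projection onto the column space R(M).
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3
M = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
X = np.linalg.pinv(M)          # Moore-Penrose inverse M†

H = lambda A: A.conj().T       # conjugate transpose A*

assert np.allclose(M @ X @ M, M)      # (1) MXM = M
assert np.allclose(X @ M @ X, X)      # (2) XMX = X
assert np.allclose(H(M @ X), M @ X)   # (3) (MX)* = MX
assert np.allclose(H(X @ M), X @ M)   # (4) (XM)* = XM

# P := M M† is Hermitian, idempotent, and fixes every column of M,
# i.e. it is the orthogonal projection matrix onto R(M).
P = M @ X
assert np.allclose(P @ P, P) and np.allclose(H(P), P)
assert np.allclose(P @ M, M)
```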
If P ∈ C^{n×n}, then P = P_𝒰 if and only if the three conditions P² = P and P* = P as well as R(P) = 𝒰 are fulfilled. Furthermore, the equation P_{𝒰⊥} = I_n − P_𝒰 holds true.

Our strategy to give a block representation of the Moore–Penrose inverse E† of the block matrix E given in (1.1) consists of three elementary steps:

Step (I)
We consider the following factorization problem for orthogonal projection matrices: Find a complex (s + t) × (p + q) matrix R = [ r11 r12 ; r21 r22 ] fulfilling P_{R(E)} = ER with block entries r11 ∈ C^{s×p}, r12 ∈ C^{s×q}, r21 ∈ C^{t×p}, and r22 ∈ C^{t×q} expressible explicitly only using the block entries a, b, c, and d of E.

Remark 2.2. Let M ∈ C^{m×n} and let X ∈ C^{n×m}. In view of Remark 2.1, then:

(a) P_{R(M)} = MX if and only if X ∈ M{1, 3}.
(b) P_{R(M*)} = XM if and only if X ∈ M{1, 4}.

We construct a suitable {1, 3}-inverse R of E using:

Theorem 2.3 (Urquhart [22], see also, e.g., [8, Ch. 1, Sec. 5, Thm. 3]). Let M ∈ C^{m×n}.

(a) Let G := MM* and let G^(1) ∈ G{1}. Then M*G^(1) ∈ M{1, 2, 4}.
(b) Let H := M*M and let H^(1) ∈ H{1}. Then H^(1)M* ∈ M{1, 2, 3}.

Applying Theorem 2.3, we will get an explicit block representation of P_{R(E)} = ER in terms of a, b, c, and d.

Step (II)
Analogous to Step (I), we construct a suitable complex (s + t) × (p + q) matrix L = [ ℓ11 ℓ12 ; ℓ21 ℓ22 ] fulfilling L ∈ E{1, 4} and hence P_{R(E*)} = LE.

Step (III)
With the matrices L and R we apply:

Theorem 2.4 (Urquhart [22], see, e.g., [8, Ch. 1, Sec. 5, Thm. 4]). If M ∈ C^{m×n}, then M^(1,4) M M^(1,3) = M† for every choice of M^(1,4) ∈ M{1, 4} and M^(1,3) ∈ M{1, 3}.

Regarding Remark 2.2, Theorem 2.4 admits the following reformulation:
Remark 2.5. Let M ∈ C^{m×n} and let L ∈ C^{n×m} and R ∈ C^{n×m} be such that LM = P_{R(M*)} and MR = P_{R(M)}. Then M† = LMR.

Consider an additive decomposition M = U + V of an m × n matrix M with two m × n matrices U and V fulfilling UV* = 0_{m×m}. In this situation, a result of Cline [5] is applicable to obtain a non-trivial representation of M† as a sum of U† and a further matrix. By Hung/Markham [12], a decomposition E = U + V with U = [ a 0_{p×t} ; c 0_{q×t} ] and V = [ 0_{p×s} b ; 0_{q×s} d ] is used in this way to derive a block representation of E† involving only Moore–Penrose inverses of block size matrices.

Regarding Remarks 2.2 and 2.1 as well as (2.1), the orthogonal projection matrix Q := P_{R(U*)} fulfills UQ = UU†U = U and VQ = (QV*)* = (U†UV*)* = 0_{m×n}. Consequently, MQ = U and M(I_n − Q) = V. Conversely, given M ∈ C^{m×n} and an orthogonal projection matrix Q ∈ C^{n×n}, it is readily checked that U := MQ and V := M(I_n − Q) fulfill M = U + V and UV* = 0_{m×m}. Thus, every decomposition M = U + V with UV* = 0_{m×m} can be written as M = MQ + M(I_n − Q) with some orthogonal projection matrix Q ∈ C^{n×n} occurring on the right-hand side and vice versa. Although not explicitly using Cline's theorem, our investigations involve an analogous decomposition, namely E = P_𝒮 E + (I_{p+q} − P_𝒮)E with the orthogonal projection matrix P_𝒮 onto the linear subspace 𝒮 spanned by the first s columns of E occurring on the left-hand side (see Lemma 3.2).

We consider a complex (p + q) × (s + t) matrix E. Let (1.1) be the block representation of E with p × s block a. Setting

Y := [ a, b ],  Z := [ c, d ],  S := [ a ; c ],  T := [ b ; d ],  (3.1)

then

E = [ Y ; Z ],  E = [ S, T ].  (3.2)

Let

µ := aa* + bb*,  σ := a*a + c*c,  (3.3)
ζ := cc* + dd*,  τ := b*b + d*d,
ρ := ca* + db*,  λ := a*b + c*d.  (3.4)

In view of (3.1), then

µ = YY*,  σ = S*S,  (3.5)
ζ = ZZ*,  τ = T*T,  (3.6)
ρ = ZY*,  λ = S*T.
(3.7)

Choose µ^(1) ∈ µ{1} and σ^(1) ∈ σ{1}. Regarding (3.5), then Theorem 2.3 shows that

Y^(1,2,4) := Y*µ^(1),  S^(1,2,3) := σ^(1)S*  (3.8)

fulfill

Y^(1,2,4) ∈ Y{1, 2, 4},  S^(1,2,3) ∈ S{1, 2, 3}.  (3.9)

Let

φ := c − (ca* + db*)µ^(1)a,  ψ := d − (ca* + db*)µ^(1)b,  (3.10)
η := b − aσ^(1)(a*b + c*d),  θ := d − cσ^(1)(a*b + c*d).  (3.11)

Because of (3.8), (3.7), (3.1), (3.4), (3.10), and (3.11), then

V := [ φ, ψ ]  and  W := [ η ; θ ]  (3.12)

admit the representations

Z(I_{s+t} − Y^(1,2,4)Y) = Z − ZY*µ^(1)Y = Z − ρµ^(1)Y = [ c, d ] − ρµ^(1)[ a, b ] = [ φ, ψ ] = V  (3.13)

and

(I_{p+q} − SS^(1,2,3))T = T − Sσ^(1)S*T = T − Sσ^(1)λ = [ b ; d ] − [ a ; c ]σ^(1)λ = [ η ; θ ] = W.  (3.14)

Using (3.13), (3.14), (3.9), Remarks 2.2 and 2.1, and [R(Y*)]⊥ = N(Y), we can infer

V = Z P_{N(Y)},  W = P_{[R(S)]⊥} T.  (3.15)

Let

ν := φφ* + ψψ*,  ω := η*η + θ*θ.  (3.16)

In view of (3.12), (3.15), and Remark 2.1, then

ν = VV* = Z P_{N(Y)} Z* = VZ*,  ω = W*W = T* P_{[R(S)]⊥} T = T*W.  (3.17)

Choose ν^(1) ∈ ν{1} and ω^(1) ∈ ω{1}. Regarding (3.17), then Theorem 2.3 shows that

V^(1,2,4) := V*ν^(1)  and  W^(1,2,3) := ω^(1)W*  (3.18)

fulfill

V^(1,2,4) ∈ V{1, 2, 4}  and  W^(1,2,3) ∈ W{1, 2, 3}.  (3.19)

Obviously, we have µ ∈ C^{p×p}_≥, σ ∈ C^{s×s}_≥, ζ ∈ C^{q×q}_≥, τ ∈ C^{t×t}_≥, ν ∈ C^{q×q}_≥, ω ∈ C^{t×t}_≥, ρ ∈ C^{q×p}, and λ ∈ C^{s×t}, where C^{n×n}_≥ denotes the set of all non-negative Hermitian complex n × n matrices.

Remark 3.1. Let

L := [ (I_{s+t} − V^(1,2,4)Z)Y^(1,2,4), V^(1,2,4) ],  R := [ S^(1,2,3)(I_{p+q} − TW^(1,2,3)) ; W^(1,2,3) ].
(3.20)

Regarding (3.20), (3.18), (3.8), (3.7), (3.1), and (3.12), then

L = [ (I_{s+t} − V*ν^(1)Z)Y*µ^(1), V*ν^(1) ]
  = [ (Y* − V*ν^(1)ZY*)µ^(1), V*ν^(1) ]
  = [ (Y* − V*ν^(1)ρ)µ^(1), V*ν^(1) ]
  = [ Y*µ^(1), 0_{(s+t)×q} ] + [ −V*ν^(1)ρµ^(1), V*ν^(1) ]
  = Y*µ^(1)[ I_p, 0_{p×q} ] + V*ν^(1)[ −ρµ^(1), I_q ]
  = [ a* ; b* ]µ^(1)[ I_p, 0_{p×q} ] + [ φ* ; ψ* ]ν^(1)[ −ρµ^(1), I_q ]
  = [ ℓ11 ℓ12 ; ℓ21 ℓ22 ],

where

ℓ12 := φ*ν^(1),  ℓ11 := (a* − ℓ12 ρ)µ^(1),  ℓ22 := ψ*ν^(1),  ℓ21 := (b* − ℓ22 ρ)µ^(1),  (3.21)

and

R = [ σ^(1)S*(I_{p+q} − Tω^(1)W*) ; ω^(1)W* ]
  = [ σ^(1)(S* − S*Tω^(1)W*) ; ω^(1)W* ]
  = [ σ^(1)(S* − λω^(1)W*) ; ω^(1)W* ]
  = [ σ^(1)S* ; 0_{t×(p+q)} ] + [ −σ^(1)λω^(1)W* ; ω^(1)W* ]
  = [ I_s ; 0_{t×s} ]σ^(1)S* + [ −σ^(1)λ ; I_t ]ω^(1)W*
  = [ I_s ; 0_{t×s} ]σ^(1)[ a*, c* ] + [ −σ^(1)λ ; I_t ]ω^(1)[ η*, θ* ]
  = [ r11 r12 ; r21 r22 ],

where

r21 := ω^(1)η*,  r11 := σ^(1)(a* − λr21),  r22 := ω^(1)θ*,  r12 := σ^(1)(c* − λr22).  (3.22)

Lemma 3.2. Let ℰ := R(E), let 𝒮 := R(S), and let 𝒲 := R(W). Then 𝒲 = ℰ ⊖ 𝒮 and P_ℰ = P_𝒮 + P_𝒲 = ER.

Proof. We first check 𝒲 = ℰ ∩ (𝒮⊥). Because of (3.19), (3.9), and Remark 2.2(a), we have P_𝒲 = WW^(1,2,3) and P_𝒮 = SS^(1,2,3). According to Remark 2.1, hence I_{p+q} − SS^(1,2,3) = P_{𝒮⊥}. By virtue of (3.14), then W = P_{𝒮⊥}T follows. From Remark 2.1 we know R(P_{𝒮⊥}) = 𝒮⊥. Consequently, 𝒲 ⊆ 𝒮⊥. Regarding (3.2) and (3.14), we have 𝒮 ⊆ ℰ and

W = (I_{p+q} − SS^(1,2,3))T = T − SS^(1,2,3)T = [ S, T ][ −S^(1,2,3)T ; I_t ] = E[ −S^(1,2,3)T ; I_t ],

implying 𝒲 ⊆ ℰ. Thus, 𝒲 ⊆ ℰ ∩ (𝒮⊥) is proved. Now we consider an arbitrary w ∈ ℰ ∩ (𝒮⊥). Then w ∈ ℰ; so there exists some v ∈ C^{s+t} with w = Ev. Let v = [ x ; y ] be the block representation of v with x ∈ C^s. Regarding (3.2), then w = Sx + Ty. In view of w ∈ 𝒮⊥ and Sx ∈ 𝒮, furthermore P_{𝒮⊥}w = w and P_{𝒮⊥}Sx = 0_{(p+q)×1}.
Taking additionally into account W = P_{𝒮⊥}T, we obtain then

w = P_{𝒮⊥}w = P_{𝒮⊥}(Sx + Ty) = P_{𝒮⊥}Ty = Wy,

implying w ∈ 𝒲. Thus, we have also shown ℰ ∩ (𝒮⊥) ⊆ 𝒲. Therefore, 𝒲 = ℰ ∩ (𝒮⊥) holds true. Since 𝒮 ⊆ ℰ, hence 𝒲 = ℰ ⊖ 𝒮. Consequently, P_𝒲 = P_ℰ − P_𝒮 follows (see, e.g., [23, Thm. 4.30(c)]). Thus, P_ℰ = P_𝒮 + P_𝒲. Taking additionally into account P_𝒮 = SS^(1,2,3) and P_𝒲 = WW^(1,2,3) as well as (3.14), (3.2), and (3.20), then we can conclude

P_ℰ = SS^(1,2,3) + WW^(1,2,3) = SS^(1,2,3) + (I_{p+q} − SS^(1,2,3))TW^(1,2,3) = SS^(1,2,3)(I_{p+q} − TW^(1,2,3)) + TW^(1,2,3) = [ S, T ][ S^(1,2,3)(I_{p+q} − TW^(1,2,3)) ; W^(1,2,3) ] = ER.

The following result can be proved analogously. We omit the details.
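Lemma 3.2 can be illustrated numerically. The following sketch (not from the paper; it uses `numpy.linalg.pinv` to form the orthogonal projections) checks the splitting P_ℰ = P_𝒮 + P_𝒲 for random real blocks a, b, c, d:

```python
# Numerical illustration of Lemma 3.2: with S = [a; c], T = [b; d] and
# W = (I - S S†) T, the projection onto R(E) splits as
# P_R(E) = P_R(S) + P_R(W), since R(W) = R(E) ⊖ R(S).
import numpy as np

rng = np.random.default_rng(1)
p, q, s, t = 4, 3, 2, 3
a = rng.standard_normal((p, s)); b = rng.standard_normal((p, t))
c = rng.standard_normal((q, s)); d = rng.standard_normal((q, t))

E = np.block([[a, b], [c, d]])
S = np.vstack([a, c])          # first s columns of E
T = np.vstack([b, d])          # last t columns of E
W = (np.eye(p + q) - S @ np.linalg.pinv(S)) @ T   # W = (I - P_R(S)) T

proj = lambda M: M @ np.linalg.pinv(M)  # orthogonal projection onto R(M)
assert np.allclose(proj(E), proj(S) + proj(W))
```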
Lemma 3.3.
Let Ẽ := R(E*), let Ỹ := R(Y*), and let Ṽ := R(V*). Then Ṽ = Ẽ ⊖ Ỹ and P_Ẽ = P_Ỹ + P_Ṽ = LE.

Remark 3.4. From Lemma 3.2 and Remark 2.2(a) we can infer R ∈ E{1, 3}, whereas Lemma 3.3 and Remark 2.2(b) yield L ∈ E{1, 4}.

Now we obtain the announced block representations of orthogonal projection matrices.

Proposition 3.5.
Let E be a complex (p + q) × (s + t) matrix and let (1.1) be the block representation of E with p × s block a. Let S be given by (3.1). Let σ and λ be given by (3.3) and (3.4). Let σ^(1) ∈ σ{1}. Let η, θ and W be given by (3.11) and (3.12). Let ω be given by (3.16) and let ω^(1) ∈ ω{1}. Then

P_{R(E)} = Sσ^(1)S* + Wω^(1)W* = [ aσ^(1)a* + ηω^(1)η*  aσ^(1)c* + ηω^(1)θ* ; cσ^(1)a* + θω^(1)η*  cσ^(1)c* + θω^(1)θ* ].

Proof.
In the proof of Lemma 3.2, we have already shown P_{R(E)} = SS^(1,2,3) + WW^(1,2,3). Taking additionally into account (3.8), (3.18), (3.1), and (3.12), the assertions follow.

Now we are able to prove a 2 × 2 block representation of the Moore–Penrose inverse E†.

Theorem 3.6. Let E be a complex (p + q) × (s + t) matrix and let (1.1) be the block representation of E with p × s block a. Let Y, S be given by (3.1). Let µ, σ and ρ, λ be given by (3.3) and (3.4). Let µ^(1) ∈ µ{1} and let σ^(1) ∈ σ{1}. Let φ, ψ and η, θ be given by (3.10) and (3.11). Let V and W be given by (3.12). Let ν and ω be given by (3.16). Let ν^(1) ∈ ν{1} and let ω^(1) ∈ ω{1}. Then

E† = LER = [ α β ; γ δ ] = (Y* − V*ν^(1)ρ)µ^(1)aσ^(1)(S* − λω^(1)W*) + (Y* − V*ν^(1)ρ)µ^(1)bω^(1)W* + V*ν^(1)cσ^(1)(S* − λω^(1)W*) + V*ν^(1)dω^(1)W*

with

α := ℓ11 a r11 + ℓ11 b r21 + ℓ12 c r11 + ℓ12 d r21,
β := ℓ11 a r12 + ℓ11 b r22 + ℓ12 c r12 + ℓ12 d r22,
γ := ℓ21 a r11 + ℓ21 b r21 + ℓ22 c r11 + ℓ22 d r21,
δ := ℓ21 a r12 + ℓ21 b r22 + ℓ22 c r12 + ℓ22 d r22,

where, for each j, k ∈ {1, 2}, the matrices ℓ_jk and r_jk are given by (3.21) and (3.22), resp.

Proof. According to Lemmas 3.3 and 3.2, we have P_{R(E*)} = LE and P_{R(E)} = ER. Thus, we can apply Remark 2.5 to obtain E† = LER. Using Remark 3.1 and (1.1), we furthermore obtain
LER = [ (Y* − V*ν^(1)ρ)µ^(1), V*ν^(1) ][ a b ; c d ][ σ^(1)(S* − λω^(1)W*) ; ω^(1)W* ] = (Y* − V*ν^(1)ρ)µ^(1)aσ^(1)(S* − λω^(1)W*) + (Y* − V*ν^(1)ρ)µ^(1)bω^(1)W* + V*ν^(1)cσ^(1)(S* − λω^(1)W*) + V*ν^(1)dω^(1)W*

as well as

LER = [ ℓ11 ℓ12 ; ℓ21 ℓ22 ][ a b ; c d ][ r11 r12 ; r21 r22 ] = [ α β ; γ δ ].

In this section, we give some examples of applications of the block representations of orthogonal projection matrices and Moore–Penrose inverses given in Proposition 3.5 and Theorem 3.6. In order to avoid lengthy formulas, we give only hints for computations.
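As a first such hint, the block formulas of Theorem 3.6 can be checked numerically. The following sketch (not part of the paper) uses `numpy.linalg.pinv` as one particular choice of the {1}-inverses µ^(1), σ^(1), ν^(1), ω^(1) and compares the assembled block matrix [ α β ; γ δ ] with NumPy's Moore–Penrose inverse of E:

```python
# Numerical illustration of Theorem 3.6, with pinv as a particular
# choice of the {1}-inverses of the four block-size Hermitian matrices.
import numpy as np

rng = np.random.default_rng(2)
p, q, s, t = 4, 3, 2, 3
a = rng.standard_normal((p, s)); b = rng.standard_normal((p, t))
c = rng.standard_normal((q, s)); d = rng.standard_normal((q, t))
E = np.block([[a, b], [c, d]])
H = lambda A: A.conj().T
pinv = np.linalg.pinv

mu    = a @ H(a) + b @ H(b)            # µ = aa* + bb*   (3.3)
sigma = H(a) @ a + H(c) @ c            # σ = a*a + c*c   (3.3)
rho   = c @ H(a) + d @ H(b)            # ρ = ca* + db*   (3.4)
lam   = H(a) @ b + H(c) @ d            # λ = a*b + c*d   (3.4)
mu1, sigma1 = pinv(mu), pinv(sigma)

phi = c - rho @ mu1 @ a;    psi   = d - rho @ mu1 @ b     # (3.10)
eta = b - a @ sigma1 @ lam; theta = d - c @ sigma1 @ lam  # (3.11)
nu    = phi @ H(phi) + psi @ H(psi)       # ν = φφ* + ψψ*  (3.16)
omega = H(eta) @ eta + H(theta) @ theta   # ω = η*η + θ*θ  (3.16)
nu1, omega1 = pinv(nu), pinv(omega)

# block entries (3.21) and (3.22)
l12 = H(phi) @ nu1;      l11 = (H(a) - l12 @ rho) @ mu1
l22 = H(psi) @ nu1;      l21 = (H(b) - l22 @ rho) @ mu1
r21 = omega1 @ H(eta);   r11 = sigma1 @ (H(a) - lam @ r21)
r22 = omega1 @ H(theta); r12 = sigma1 @ (H(c) - lam @ r22)

alpha = l11 @ a @ r11 + l11 @ b @ r21 + l12 @ c @ r11 + l12 @ d @ r21
beta  = l11 @ a @ r12 + l11 @ b @ r22 + l12 @ c @ r12 + l12 @ d @ r22
gamma = l21 @ a @ r11 + l21 @ b @ r21 + l22 @ c @ r11 + l22 @ d @ r21
delta = l21 @ a @ r12 + l21 @ b @ r22 + l22 @ c @ r12 + l22 @ d @ r22

Edag = np.block([[alpha, beta], [gamma, delta]])
assert np.allclose(Edag, pinv(E))   # E† = [ α β ; γ δ ]
```

Only generalized inverses of matrices of the block sizes (here p×p, s×s, q×q, t×t) are inverted, as promised in the introduction.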
Example 4.1. Let N be a positive integer and let 𝒰 and 𝒱 be two complementary linear subspaces of C^N, i.e., the subspaces 𝒰 and 𝒱 fulfill 𝒰 ⊕ 𝒱 = C^N. Then there exists a unique complex N × N matrix P_{𝒰,𝒱} such that P_{𝒰,𝒱}x = u for all x ∈ C^N, where x = u + v is the unique representation of x with u ∈ 𝒰 and v ∈ 𝒱. This matrix P_{𝒰,𝒱} is called the oblique projection matrix onto 𝒰 along 𝒱 and admits the representations

P_{𝒰,𝒱} = (P_{𝒱⊥}P_𝒰)† = [(I_N − P_𝒱)P_𝒰]†,  (4.1)

see [8] or also [3, Ch. 2, Sec. 7, Ex. 60, formula (80)]. Assume that N = p + q and that 𝒰 = R(E) and 𝒱 = R(F) for two matrices E ∈ C^{(p+q)×(s+t)} and F ∈ C^{(p+q)×(m+n)} with block representations (1.1) and F = [ e f ; g h ], where a is a p × s block and e is a p × m block. Then (4.1) together with Proposition 3.5 and Theorem 3.6 could be used to obtain a block representation of P_{𝒰,𝒱} in terms of a, b, c, d and e, f, g, h.

Example 4.2. Because 𝒰 ⊕ (𝒰⊥) = C^n holds true for every linear subspace 𝒰 of C^n, the Moore–Penrose inverse of matrices is a special case of the uniquely determined {1, 2}-inverse with simultaneously prescribed column space and null space. More precisely, if M ∈ C^{m×n} and linear subspaces 𝒰 of C^m and 𝒱 of C^n with R(M) ⊕ 𝒰 = C^m and N(M) ⊕ 𝒱 = C^n are given, then there exists a unique complex n × m matrix X such that the four conditions MXM = M, XMX = X, R(X) = 𝒱, and N(X) = 𝒰 are fulfilled. This matrix X is denoted by M^(1,2)_{𝒱,𝒰} and admits the representations

M^(1,2)_{𝒱,𝒰} = P_{𝒱,N(M)} M^(1) P_{R(M),𝒰} = (P_{[N(M)]⊥}P_𝒱)† M^(1) (P_{𝒰⊥}P_{R(M)})†  (4.2)

with every M^(1) ∈ M{1} (see, e.g., [3, Ch. 2, Sec. 6, Thm. 12]). In particular, (4.2) is valid for M^(1) = M†. Furthermore, M† = M^(1,2)_{[N(M)]⊥,[R(M)]⊥}.
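Formula (4.1) lends itself to a direct numerical check. The sketch below (not from the paper) builds the oblique projection matrix onto 𝒰 = R(U) along 𝒱 = R(V) for randomly chosen complementary subspaces and verifies its defining properties:

```python
# Numerical sketch of formula (4.1): for complementary subspaces U, V of
# C^N, the oblique projection onto U along V is
#   P_{U,V} = (P_{V⊥} P_U)† = [(I - P_V) P_U]†.
import numpy as np

rng = np.random.default_rng(3)
N, k = 5, 2
U = rng.standard_normal((N, k))      # columns span the subspace U
V = rng.standard_normal((N, N - k))  # columns span the subspace V

proj = lambda M: M @ np.linalg.pinv(M)  # orthogonal projection onto R(M)
P = np.linalg.pinv((np.eye(N) - proj(V)) @ proj(U))  # P_{U,V} via (4.1)

assert np.allclose(P @ P, P)      # idempotent
assert np.allclose(P @ U, U)      # acts as the identity on U
assert np.allclose(P @ V, 0 * V)  # annihilates V
```

Note that P is idempotent but in general not Hermitian, which distinguishes the oblique projection from the orthogonal one.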
Example 4.1 could be used to obtain a block representation of M^(1,2)_{𝒱,𝒰}, if the subspaces 𝒰 and 𝒱 are given as column spaces of certain matrices, partitioned accordingly to a block representation of M, and if a matrix M^(1) ∈ M{1} with known block representation is given. (In particular, for M^(1) = M† one can use Theorem 3.6.)

In this final section, we give alternative representations of the matrices L and R occurring in Theorem 3.6 and Lemmas 3.3 and 3.2. Utilizing these representations, further block representations of the Moore–Penrose inverse E† could possibly be obtained, in particular, in the case of E satisfying additional conditions. We will not pursue this direction any further here. We continue to use the notations given above.

Lemma 5.1 (Rohde [17], see, e.g., [8, Ch. 5, Sec. 2, Ex. 10(a)]). Let M ∈ C^{(p+q)×(p+q)}_≥ and let M = [ m11 m12 ; m21 m22 ] be the block representation of M with p × p block m11. Let m11^(1) ∈ m11{1} and let ς := m22 − m21 m11^(1) m12. Let ς^(1) ∈ ς{1}. Then

M^(1) := [ m11^(1) + m11^(1) m12 ς^(1) m21 m11^(1)   −m11^(1) m12 ς^(1) ; −ς^(1) m21 m11^(1)   ς^(1) ]

belongs to M{1}.

Remark 5.2. Regarding (3.17), (3.13), (3.14), (3.8), (3.6), and (3.7), we can infer

ν = VZ* = Z(I_{s+t} − Y^(1,2,4)Y)Z* = Z(I_{s+t} − Y*µ^(1)Y)Z* = ζ − ρµ^(1)ρ*  (5.1)

and

ω = T*W = T*(I_{p+q} − SS^(1,2,3))T = T*(I_{p+q} − Sσ^(1)S*)T = τ − λ*σ^(1)λ.  (5.2)

Lemma 5.3.
Let

G := EE*  and  H := E*E.  (5.3)

Let µ^(1) ∈ µ{1} and ν^(1) ∈ ν{1} and let σ^(1) ∈ σ{1} and ω^(1) ∈ ω{1}. Then the matrices

G^(1) := [ µ^(1) + µ^(1)ρ*ν^(1)ρµ^(1)   −µ^(1)ρ*ν^(1) ; −ν^(1)ρµ^(1)   ν^(1) ]  (5.4)

and

H^(1) := [ σ^(1) + σ^(1)λω^(1)λ*σ^(1)   −σ^(1)λω^(1) ; −ω^(1)λ*σ^(1)   ω^(1) ]  (5.5)

fulfill G^(1) ∈ G{1} and H^(1) ∈ H{1}.

Proof. Clearly, G ∈ C^{(p+q)×(p+q)}_≥ and H ∈ C^{(s+t)×(s+t)}_≥. Regarding (3.2), (3.5), and (3.6), we have G = [ Y ; Z ][ Y*, Z* ] = [ YY*  YZ* ; ZY*  ZZ* ] = [ µ ρ* ; ρ ζ ] and H = [ S* ; T* ][ S, T ] = [ S*S  S*T ; T*S  T*T ] = [ σ λ ; λ* τ ]. Taking additionally into account Remark 5.2, thus from Lemma 5.1 the assertions immediately follow.

Finally, in the following result, we not only get new representations for the matrices L and R occurring in Theorem 3.6 and Lemmas 3.3 and 3.2, but also obtain their belonging to the sets E{1, 2, 4} and E{1, 2, 3}, resp., thereby improving Remark 3.4.

Lemma 5.4.
The matrices L and R admit the representations L = E*G^(1) and R = H^(1)E* and fulfill L ∈ E{1, 2, 4} and R ∈ E{1, 2, 3}.

Proof. Using (3.9) and Remarks 2.2 and 2.1, we have (Y^(1,2,4)Y)* = P*_{R(Y*)} = P_{R(Y*)} = Y^(1,2,4)Y and, analogously, (SS^(1,2,3))* = SS^(1,2,3). Taking additionally into account (3.13), (3.14), (3.8), and (3.7), we thus can conclude

V* = (I_{s+t} − Y^(1,2,4)Y)Z* = Z* − Y*µ^(1)YZ* = Z* − Y*µ^(1)ρ*  (5.6)

and

W* = T*(I_{p+q} − SS^(1,2,3)) = T* − T*Sσ^(1)S* = T* − λ*σ^(1)S*.  (5.7)

Regarding (3.2), (5.4), (5.5), (5.1), (5.2), (5.6), (5.7), and Remark 3.1, we obtain

E*G^(1) = [ Y*, Z* ]( [ I_p ; 0_{q×p} ]µ^(1)[ I_p, 0_{p×q} ] + [ −µ^(1)ρ* ; I_q ]ν^(1)[ −ρµ^(1), I_q ] ) = Y*µ^(1)[ I_p, 0_{p×q} ] + (Z* − Y*µ^(1)ρ*)ν^(1)[ −ρµ^(1), I_q ] = L  (5.8)

and

H^(1)E* = ( [ I_s ; 0_{t×s} ]σ^(1)[ I_s, 0_{s×t} ] + [ −σ^(1)λ ; I_t ]ω^(1)[ −λ*σ^(1), I_t ] )[ S* ; T* ] = [ I_s ; 0_{t×s} ]σ^(1)S* + [ −σ^(1)λ ; I_t ]ω^(1)(T* − λ*σ^(1)S*) = R.  (5.9)

According to Lemma 5.3, we have G^(1) ∈ G{1} and H^(1) ∈ H{1}. Taking additionally into account (5.3), (5.8), and (5.9), then L ∈ E{1, 2, 4} and R ∈ E{1, 2, 3} follow from Theorem 2.3.

Observe that Lemma 5.4 in connection with Theorem 3.6 yields a factorization E† = LER with particular matrices L ∈ E{1, 2, 4} and R ∈ E{1, 2, 3}. This gives a special factorization of the kind mentioned in Urquhart's result (Theorem 2.4), whereby all matrices can be expressed explicitly in terms of the block entries a, b, c, d of the given matrix E.

References

[1] Baksalary, J.K. and O.M. Baksalary: Particular formulae for the Moore–Penrose inverse of a columnwise partitioned matrix. Linear Algebra Appl., 421(1):16–23, 2007. https://doi.org/10.1016/j.laa.2006.03.031.

[2] Baksalary, J.K. and G.P.H. Styan:
Generalized inverses of partitioned matrices in Banachiewicz–Schur form. Linear Algebra Appl., 354:41–47, 2002. https://doi.org/10.1016/S0024-3795(02)00334-8. Ninth special issue on linear algebra and statistics.

[3] Ben-Israel, A. and T.N.E. Greville: Generalized inverses, vol. 15 of CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC. Springer-Verlag, New York, second ed., 2003. Theory and applications.

[4] Castro-González, N., M.F. Martínez-Serrano, and J. Robles: Expressions for the Moore–Penrose inverse of block matrices involving the Schur complement. Linear Algebra Appl., 471:353–368, 2015. https://doi.org/10.1016/j.laa.2015.01.003.

[5] Cline, R.E.: Representations for the generalized inverse of sums of matrices. J. Soc. Indust. Appl. Math. Ser. B Numer. Anal., 2:99–114, 1965.

[6] Deng, C.Y. and H.K. Du: Representations of the Moore–Penrose inverse of 2 × 2 block operator valued matrices. J. Korean Math. Soc., 46(6):1139–1150, 2009. https://doi.org/10.4134/JKMS.2009.46.6.1139.

[7] Deng, C.Y. and H.K. Du: Representations of the Moore–Penrose inverse for a class of 2-by-2 block operator valued partial matrices. Linear Multilinear Algebra, 58(1-2):15–26, 2010. https://doi.org/10.1080/03081080801980457.

[8] Greville, T.N.E.: Solutions of the matrix equation XAX = X, and relations between oblique and orthogonal projectors. SIAM J. Appl. Math., 26:828–832, 1974. https://doi.org/10.1137/0126074.

[9] Groß, J.: The Moore–Penrose inverse of a partitioned nonnegative definite matrix. Linear Algebra Appl., 321:113–121, 2000. https://doi.org/10.1016/S0024-3795(99)00073-7. Linear algebra and statistics (Fort Lauderdale, FL, 1998).

[10] Hartwig, R.E.: Rank factorization and Moore–Penrose inversion. Indust. Math., 26(1):49–63, 1976.

[11] He, C.N.: General forms for Moore–Penrose inverses of matrices by block permutation. J. Nat. Sci. Hunan Norm. Univ., 29(4):1–5, 2006.

[12] Hung, C.H. and T.L. Markham: The Moore–Penrose inverse of a partitioned matrix M = ( A B ; D C ). Linear Algebra Appl., 11:73–86, 1975. https://doi.org/10.1016/0024-3795(75)90118-4.

[13] Miao, J.M.: General expressions for the Moore–Penrose inverse of a 2 × 2 block matrix. Linear Algebra Appl., 151:1–15, 1991. https://doi.org/10.1016/0024-3795(91)90351-V.

[14] Mihailović, B., V. Miler Jerković, and B. Malešević: Solving fuzzy linear systems using a block representation of generalized inverses: the Moore–Penrose inverse. Fuzzy Sets and Systems, 353:44–65, 2018. https://doi.org/10.1016/j.fss.2017.11.007.

[15] Milovanović, G.V. and P.S. Stanimirović: On Moore–Penrose inverse of block matrices and full-rank factorization. Publ. Inst. Math. (Beograd) (N.S.), 62(76):26–40, 1997.

[16] Penrose, R.: A generalized inverse for matrices. Proc. Cambridge Philos. Soc., 51:406–413, 1955.

[17] Rohde, C.A.: Generalized inverses of partitioned matrices. J. Soc. Indust. Appl. Math., 13:1033–1035, 1965.

[18] Smith, R.L.: Moore–Penrose inverses of block circulant and block k-circulant matrices. Linear Algebra Appl., 16(3):237–245, 1977. https://doi.org/10.1016/0024-3795(77)90007-6.

[19] Tang, S. and H.Z. Wu: The Moore–Penrose inverse and the weighted Drazin inverse of block k-circulant matrices. J. Hefei Univ. Technol. Nat. Sci., 32(9):1442–1444, 1448, 2009.

[20] Tian, Y.: The Moore–Penrose inverses of m × n block matrices and their applications. Linear Algebra Appl., 283(1-3):35–60, 1998. https://doi.org/10.1016/S0024-3795(98)10049-6.

[21] Tian, Y. and Y. Takane: More on generalized inverses of partitioned matrices with Banachiewicz–Schur forms. Linear Algebra Appl., 430(5-6):1641–1655, 2009. https://doi.org/10.1016/j.laa.2008.06.007.

[22] Urquhart, N.S.: Computation of generalized inverse matrices which satisfy specified conditions. SIAM Rev., 10:216–218, 1968. https://doi.org/10.1137/1010035.

[23] Weidmann, J.: Linear operators in Hilbert spaces, vol. 68 of Graduate Texts in Mathematics. Springer-Verlag, New York-Berlin, 1980. Translated from the German by Joseph Szücs.

[24] Xu, Q.: Moore–Penrose inverses of partitioned adjointable operators on Hilbert C*-modules. Linear Algebra Appl., 430(11-12):2929–2942, 2009. https://doi.org/10.1016/j.laa.2009.01.003.

[25] Xu, Q., Y. Chen, and C. Song: Representations for weighted Moore–Penrose inverses of partitioned adjointable operators. Linear Algebra Appl., 438(1):10–30, 2013. https://doi.org/10.1016/j.laa.2012.08.002.

[26] Xu, Q. and X. Hu: Particular formulae for the Moore–Penrose inverses of the partitioned bounded linear operators. Linear Algebra Appl., 428(11-12):2941–2946, 2008. https://doi.org/10.1016/j.laa.2008.01.021.

[27] Yan, Z.Z.: New representations of the Moore–Penrose inverse of 2 × 2 block matrices. Linear Algebra Appl., 456:3–15, 2014. https://doi.org/10.1016/j.laa.2012.08.014.

Universität Leipzig
Fakultät für Mathematik und Informatik
PF 10 09 20
D-04009 Leipzig
Germany