Structured mapping problems for linearly structured matrices
arXiv [math.NA]
Bibhas Adhikari∗ and Rafikul Alam†

Abstract.
Given an appropriate class of structured matrices $S$, we characterize matrices $X$ and $B$ for which there exists a matrix $A \in S$ such that $AX = B$, and determine all matrices in $S$ mapping $X$ to $B$. We also determine all matrices in $S$ mapping $X$ to $B$ and having the smallest norm. We use these results to investigate structured backward errors of approximate eigenpairs and approximate invariant subspaces, and structured pseudospectra of structured matrices.

Keywords.
Structured matrices, structured backward errors, Jordan and Lie algebras, eigenvalues, eigenvectors, invariant subspaces.
AMS subject classification (2000):
Introduction.

Consider a stable linear time-invariant (LTI) control system
\[
\dot{x} = Ax + Bu, \quad x(0) = 0, \qquad y = Cx + Du, \tag{1}
\]
with $A \in \mathbb{K}^{n\times n}$, $B \in \mathbb{K}^{n\times p}$, $C \in \mathbb{K}^{p\times n}$ and $D \in \mathbb{K}^{p\times p}$. Here $\mathbb{K} := \mathbb{R}$ or $\mathbb{C}$, $u$ is the input, $x$ is the state and $y$ is the output. The system (1) is said to be passive if the Hamiltonian matrix
\[
H = \begin{bmatrix} F & G \\ H & -F^* \end{bmatrix} := \begin{bmatrix} A - BR^{-1}C & -BR^{-1}B^* \\ -C^* R^{-1} C & -(A - BR^{-1}C)^* \end{bmatrix} \tag{2}
\]
has no purely imaginary eigenvalues, where $R := D + D^*$; see [3, 6, 2]. A matrix $H \in \mathbb{K}^{2n\times 2n}$ of the form
\[
H = \begin{bmatrix} A & F \\ G & -A^* \end{bmatrix}
\]
is called Hamiltonian, where $G^* = G$ and $F^* = F$. Equivalently, $H$ is Hamiltonian $\iff (JH)^* = JH$, where
\[
J := \begin{bmatrix} 0 & I \\ -I & 0 \end{bmatrix}
\]
and $I$ is the identity matrix of size $n$. For the passivation problem, when purely imaginary eigenvalues occur, one tries to perturb $H$ by a Hamiltonian matrix $E$ with small norm so that the perturbed matrix $H + E$ has no purely imaginary eigenvalues. If such an $E$ exists, then for some $X \in \mathbb{K}^{2n\times p}$ and $D \in \mathbb{K}^{p\times p}$, we have
\[
(H + E)X = XD \Longrightarrow EX = B := XD - HX.
\]
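As an illustrative aside (not part of the paper), the equivalence between the block form of a Hamiltonian matrix and the condition $(JH)^* = JH$ is easy to check numerically; the NumPy sketch below uses randomly generated blocks and the sesquilinear case $* = H$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Build a random Hamiltonian matrix H = [[A, F], [G, -A^H]] with F, G Hermitian.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
F = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)); F = (F + F.conj().T) / 2
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)); G = (G + G.conj().T) / 2
H = np.block([[A, F], [G, -A.conj().T]])

# J = [[0, I], [-I, 0]]; H is Hamiltonian  <=>  (J H)^H = J H.
J = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])
JH = J @ H
is_hamiltonian = np.allclose(JH, JH.conj().T)
print(is_hamiltonian)  # True
```

Here $JH = \begin{bmatrix} G & -A^H \\ -A & -F \end{bmatrix}$, which is Hermitian precisely because $F$ and $G$ are.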
This leads us to the following mapping problem.
Problem 1. (Hamiltonian mapping problem)
Given $X, B \in \mathbb{K}^{2n\times p}$, consider
\[
\mathrm{Ham}(X,B) := \{ H \in \mathbb{K}^{2n\times 2n} : (JH)^* = JH \text{ and } HX = B \}, \qquad
\sigma_{\mathrm{Ham}}(X,B) := \inf\{ \|H\| : H \in \mathrm{Ham}(X,B) \}.
\]
• Characterize $X, B \in \mathbb{K}^{2n\times p}$ for which $\mathrm{Ham}(X,B) \neq \emptyset$ and determine all matrices in $\mathrm{Ham}(X,B)$.
• Also determine all optimal solutions $H_o \in \mathrm{Ham}(X,B)$ such that $\|H_o\| = \sigma_{\mathrm{Ham}}(X,B)$.

∗ CoE Systems Science, IIT Jodhpur, India, E-mail: [email protected]
† Department of Mathematics, IIT Guwahati, India, E-mail: rafi[email protected], Fax: +91-361-2690762/2582649.

Motivated by
Problem 1, we now consider the structured mapping problem for various classes of structured matrices. Let $S$ denote a class of structured matrices in $\mathbb{K}^{n\times n}$. The class $S$ we consider in this paper is either a Jordan or a Lie algebra associated with an appropriate scalar product on $\mathbb{K}^n$. This provides a general setting that encompasses important classes of structured matrices such as Hamiltonian, skew-Hamiltonian, symmetric, skew-symmetric, pseudosymmetric, persymmetric, Hermitian, skew-Hermitian, pseudo-Hermitian and pseudo-skew-Hermitian matrices, to name only a few; see [9]. We, therefore, consider the following problem.
Problem 2. (Structured Mapping Problem)
Let $S \subset \mathbb{K}^{n\times n}$ be a class of structured matrices and let $X, B \in \mathbb{K}^{n\times p}$. Set
\[
S(X,B) := \{ A \in S : AX = B \}, \qquad \sigma_S(X,B) := \inf\{ \|A\| : A \in S(X,B) \}.
\]
• Existence: Characterize $X, B \in \mathbb{K}^{n\times p}$ for which $S(X,B) \neq \emptyset$.
• Characterization: Determine all matrices in $S(X,B)$. Also determine all optimal solutions $A_o \in S(X,B)$ such that $\|A_o\| = \sigma_S(X,B)$.

We mention that the structured backward error of an approximate invariant subspace of a structured matrix also leads to a structured mapping problem. A subspace $\mathcal{X}$ is invariant under $A$ if $A\mathcal{X} \subset \mathcal{X}$.

Problem 3. (Structured backward error)
Let $S \subset \mathbb{K}^{n\times n}$ be a class of structured matrices and $A \in S$. Let $\mathcal{X}$ be a subspace of $\mathbb{K}^n$. Set
\[
\omega_S(A,\mathcal{X}) := \min\{ \|\Delta A\| : \Delta A \in S \text{ and } (A + \Delta A)\mathcal{X} \subset \mathcal{X} \}.
\]
Find all $E \in S$ such that $(A+E)\mathcal{X} \subset \mathcal{X}$ and $\|E\| = \omega_S(A,\mathcal{X})$.

If such a matrix $E \in S$ exists then
\[
(A+E)\mathcal{X} \subset \mathcal{X} \Rightarrow EV = VR - AV =: B
\]
for some $R$ and a full column rank matrix $V$ whose columns form a basis of $\mathcal{X}$. This shows that a structured mapping problem naturally arises when analyzing the structured backward error of an approximate invariant subspace.

Solutions of structured and unstructured mapping problems for a pair of vectors $x$ and $b$ in $\mathbb{K}^n$ have been studied extensively; see [9] and the references therein. In fact, for a pair of vectors $x$ and $b$ in $\mathbb{K}^n$, a complete solution of the structured mapping problem has been provided in [9] when the class $S \subset \mathbb{K}^{n\times n}$ of structured matrices is a Jordan or a Lie algebra associated with an orthosymmetric scalar product on $\mathbb{K}^n$. For a pair of matrices $X$ and $B$ in $\mathbb{K}^{n\times p}$, existence and characterization of solutions to $AX = B$ have been discussed for Hermitian solutions in [10, 13] and for skew-Hermitian and symmetric solutions in [14]. Also, for the Frobenius norm, [13] provides an optimal Hermitian solution and [14] provides optimal skew-Hermitian and symmetric solutions to $AX = B$.

The main contributions of this paper are as follows. We provide a complete solution of the structured mapping problem (Problem 2) when the class $S \subset \mathbb{K}^{n\times n}$ of structured matrices is a Jordan or a Lie algebra associated with an orthosymmetric scalar product on $\mathbb{K}^n$. We show that for the spectral norm there are infinitely many optimal solutions, whereas for the Frobenius norm the optimal solution is unique. We determine all optimal solutions for the spectral and the Frobenius norms. We show that the results in [9] obtained for a pair of vectors follow as special cases of our general results. Finally, as an application of the structured mapping problem, we analyze structured backward errors of approximate invariant subspaces and approximate eigenpairs, and structured pseudospectra of structured matrices.

Notation.
We denote the spectrum and the trace of a square matrix $A$ by $\Lambda(A)$ and $\mathrm{Tr}(A)$, respectively. Also, we denote the subspace spanned by the columns of an $m$-by-$n$ matrix $X$ by $\mathrm{span}(X)$ and the complex conjugate of $X$ by $\bar{X}$. Let $\mathbb{K}^{n\times n}$ denote the set of all $n$-by-$n$ matrices with entries in $\mathbb{K}$, where $\mathbb{K} = \mathbb{R}$ or $\mathbb{C}$. We denote the transpose of a matrix $X \in \mathbb{K}^{m\times n}$ by $X^T$ and the conjugate transpose by $X^H$.

We now briefly define the structured matrices that we consider in this paper; see [9] for further details. Let $M \in \mathbb{K}^{n\times n}$ be unitary. Assume further that $M$ is either symmetric or skew-symmetric or Hermitian or skew-Hermitian. Define the scalar product $\langle\cdot,\cdot\rangle_M : \mathbb{K}^n \times \mathbb{K}^n \to \mathbb{K}$ by
\[
\langle x, y\rangle_M := \begin{cases} y^T M x, & \text{bilinear form}, \\ y^H M x, & \text{sesquilinear form}. \end{cases} \tag{3}
\]
Then for $A \in \mathbb{K}^{n\times n}$ there is a unique adjoint operator $A^\star$ relative to the scalar product (3) such that $\langle Ax, y\rangle_M = \langle x, A^\star y\rangle_M$ for all $x, y \in \mathbb{K}^n$. The adjoint $A^\star$ is explicitly given by
\[
A^\star = \begin{cases} M^{-1} A^T M, & \text{bilinear form}, \\ M^{-1} A^H M, & \text{sesquilinear form}. \end{cases} \tag{4}
\]
Consider the Lie algebra $\mathbb{L}$ and the Jordan algebra $\mathbb{J}$ associated with the scalar product (3), given by
\[
\mathbb{L} := \{ A \in \mathbb{K}^{n\times n} : A^\star = -A \} \quad \text{and} \quad \mathbb{J} := \{ A \in \mathbb{K}^{n\times n} : A^\star = A \}. \tag{5}
\]
In this paper, we consider $S = \mathbb{L}$ or $S = \mathbb{J}$, and refer to the matrices in $S$ as structured matrices. The Jordan and Lie algebras so defined provide a general framework for analyzing a great many important classes of structured matrices, including Hamiltonian, skew-Hamiltonian, symmetric, skew-symmetric, pseudosymmetric, persymmetric, Hermitian, skew-Hermitian, pseudo-Hermitian and pseudo-skew-Hermitian matrices, to name only a few; see [9, Table 2.1]. For the rest of the paper, we set
\[
\mathrm{sym} := \{ A \in \mathbb{K}^{n\times n} : A^T = A \}, \qquad \mathrm{skew\text{-}sym} := \{ A \in \mathbb{K}^{n\times n} : A^T = -A \},
\]
\[
\mathrm{Herm} := \{ A \in \mathbb{C}^{n\times n} : A^H = A \}, \qquad \mathrm{skew\text{-}Herm} := \{ A \in \mathbb{C}^{n\times n} : A^H = -A \}.
\]
Also define the set $MS$ by $MS := \{ MA : A \in S \}$.
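A small NumPy sketch (illustrative; the function names are ours, not the paper's) implements the adjoint (4) and the membership tests (5). With $M = J$ it recovers the Hamiltonian example from the introduction: the Hamiltonian matrices are exactly the Lie algebra of the scalar product defined by $J$.

```python
import numpy as np

def adjoint(A, M, form="sesquilinear"):
    """A* = M^{-1} A^T M (bilinear) or M^{-1} A^H M (sesquilinear); see (4)."""
    At = A.T if form == "bilinear" else A.conj().T
    return np.linalg.solve(M, At @ M)   # M^{-1}(A^* M) without forming M^{-1}

def in_jordan(A, M, form="sesquilinear"):
    return np.allclose(adjoint(A, M, form), A)

def in_lie(A, M, form="sesquilinear"):
    return np.allclose(adjoint(A, M, form), -A)

n = 2
J = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])

# A real Hamiltonian matrix [[A0, F], [G, -A0^T]] with F, G symmetric lies in
# the Lie algebra of the scalar product defined by M = J.
rng = np.random.default_rng(1)
A0 = rng.standard_normal((n, n))
F = rng.standard_normal((n, n)); F = (F + F.T) / 2
G = rng.standard_normal((n, n)); G = (G + G.T) / 2
H = np.block([[A0, F], [G, -A0.T]])
print(in_lie(H, J))     # True
print(in_jordan(H, J))  # False (only the zero matrix is in both)
```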
Then, in view of (4) and (5), it follows that
\[
S \in \{\mathbb{J}, \mathbb{L}\} \iff MS \in \{\mathrm{sym}, \mathrm{skew\text{-}sym}, \mathrm{Herm}, \mathrm{skew\text{-}Herm}\}. \tag{6}
\]
This shows that the four classes of structured matrices, namely, symmetric, skew-symmetric, Hermitian and skew-Hermitian matrices, are prototypes of the more general structured matrices belonging to the Jordan and Lie algebras given in (5).

Let $A, B, C$ and $D$ be matrices. Then the matrix $T := \begin{bmatrix} A & C \\ B & D \end{bmatrix}$ is called a dilation of $A$. The norm preserving dilation problem is then stated as follows. Given matrices $A, B, C$ and a positive number
\[
\mu \geq \max\left( \left\| \begin{bmatrix} A \\ B \end{bmatrix} \right\|, \; \left\| \begin{bmatrix} A & C \end{bmatrix} \right\| \right),
\]
find all possible $D$ such that $\left\| \begin{bmatrix} A & C \\ B & D \end{bmatrix} \right\| \leq \mu$.

Theorem 2.1 (Davis-Kahan-Weinberger, [4])
Let
$A, B, C$ be given matrices. Then for any positive number $\mu$ satisfying the inequality above, there exists $D$ such that $\left\| \begin{bmatrix} A & C \\ B & D \end{bmatrix} \right\| \leq \mu$. Indeed, the matrices $D$ which have this property are exactly those of the form
\[
D = -K A^H L + \mu (I - KK^H)^{1/2} Z (I - L^H L)^{1/2},
\]
where $K^H := (\mu^2 I - A^H A)^{-1/2} B^H$, $L := (\mu^2 I - AA^H)^{-1/2} C$, and $Z$ is an arbitrary contraction, that is, $\|Z\| \leq 1$.

We mention that when $(\mu^2 I - A^H A)$ is singular, the inverses in $K^H$ and $L$ are replaced by their Moore-Penrose pseudoinverses (see [11]). An interesting fact about Theorem 2.1 is that if $T(D)$ is symmetric or skew-symmetric or Hermitian or skew-Hermitian, then the solution matrices $D$ are, respectively, symmetric or skew-symmetric or Hermitian or skew-Hermitian [1].

For compact representation of our results, in the rest of the paper we write $A^*$ to denote either the transpose $A^T$ or the conjugate transpose $A^H$. Often we write $A^*$ with $* \in \{T, H\}$. With this notational convention, define the map $F_* : \mathbb{K}^{n\times p} \times \mathbb{K}^{n\times p} \to \mathbb{K}^{n\times n}$ by
\[
F_*(X,B) := \begin{cases}
BX^\dagger + (BX^\dagger)^* - (X^\dagger)^* (X^* B)^* X^\dagger, & \text{if } (X^*B)^* = X^*B, \\
BX^\dagger - (BX^\dagger)^* - (X^\dagger)^* (X^* B)^* X^\dagger, & \text{if } (X^*B)^* = -X^*B, \\
BX^\dagger, & \text{else},
\end{cases} \tag{7}
\]
where $X^\dagger$ is the Moore-Penrose pseudoinverse of $X$ and $* \in \{T, H\}$. We write $F_* = F_T$ when $* = T$ and $F_* = F_H$ when $* = H$. Then it follows that $F_*$ has the following properties.

1. If $X^T B$ is symmetric/skew-symmetric then $F_T(X,B) \in \mathrm{sym}/\mathrm{skew\text{-}sym}$.
2. If $X^H B$ is Hermitian/skew-Hermitian then $F_H(X,B) \in \mathrm{Herm}/\mathrm{skew\text{-}Herm}$.
3. If $BX^\dagger X = B$ then $F_*(X,B)X = B$.

This shows that the matrix $F_*(X,B)$ is a potential candidate for a solution of the structured mapping problem for the four classes of structured matrices $\mathrm{sym}$, $\mathrm{skew\text{-}sym}$, $\mathrm{Herm}$ and $\mathrm{skew\text{-}Herm}$. More generally, the following result provides a necessary and sufficient condition for the existence of a solution of the structured mapping problem.
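For the sesquilinear case $* = H$, the map (7) can be transcribed directly into NumPy. The sketch below (illustrative code, ours) checks properties 2 and 3 on a random example in which $X^H B$ is Hermitian by construction.

```python
import numpy as np

def F_H(X, B):
    """The map F_*(X, B) of (7) with * = H (conjugate transpose)."""
    Xp = np.linalg.pinv(X)              # Moore-Penrose pseudoinverse X†
    BXp = B @ Xp
    S = X.conj().T @ B                  # X^H B
    if np.allclose(S.conj().T, S):      # X^H B Hermitian
        return BXp + BXp.conj().T - Xp.conj().T @ S.conj().T @ Xp
    if np.allclose(S.conj().T, -S):     # X^H B skew-Hermitian
        return BXp - BXp.conj().T - Xp.conj().T @ S.conj().T @ Xp
    return BXp

# Pick X and B = A X for a Hermitian A, so that X^H B is Hermitian and
# B X† X = B automatically (X has full column rank here).
rng = np.random.default_rng(2)
n, p = 5, 2
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (A + A.conj().T) / 2
X = rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))
B = A @ X

FH = F_H(X, B)
print(np.allclose(FH, FH.conj().T))  # True: F_H(X, B) is Hermitian
print(np.allclose(FH @ X, B))        # True: F_H(X, B) maps X to B
```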
Theorem 3.1 (Existence)
Let $(X,B) \in \mathbb{K}^{n\times p} \times \mathbb{K}^{n\times p}$ and $S \in \{\mathbb{J}, \mathbb{L}\}$. Then there is a matrix $A \in S$ such that $AX = B$ if and only if (a) $BX^\dagger X = B$ and (b) the condition in Table 1 holds.

    M          |  S = J                    |  S = L
    M^T =  M   |  (X^T M B)^T =  X^T M B   |  (X^T M B)^T = -X^T M B
    M^T = -M   |  (X^T M B)^T = -X^T M B   |  (X^T M B)^T =  X^T M B
    M^H =  M   |  (X^H M B)^H =  X^H M B   |  (X^H M B)^H = -X^H M B
    M^H = -M   |  (X^H M B)^H = -X^H M B   |  (X^H M B)^H =  X^H M B
Table 1: Condition for $S(X,B) \neq \emptyset$.

Proof:
Suppose that there exists $A \in S$ such that $AX = B$. Then $X^* M B = X^* M A X$ for $* \in \{T, H\}$. Since $MA \in MS$, by (6) it follows that $X^* M B$ is symmetric/skew-symmetric (resp., Hermitian/skew-Hermitian) when $* = T$ (resp., $* = H$). Hence the conditions in (b) are satisfied. Again, since $AX = B$, we have $BX^\dagger X = AXX^\dagger X = AX = B$.

Conversely, suppose that the conditions are satisfied. Then, setting $A := M^{-1} F_*(X, MB)$, it follows from (6) and the properties of $F_*$ that $AX = B$ and $A \in S$, where $* \in \{T, H\}$. This completes the proof. □

Remark 3.2 If $X$ has full column rank then $X^\dagger X = I$. Consequently, $BX^\dagger X = B$. Thus, for a full column rank matrix $X$, the condition (a) in Theorem 3.1 is automatically satisfied.

We mention that for the special case when $x \in \mathbb{K}^n$ and $b \in \mathbb{K}^n$, by Theorem 3.1 we obtain the following necessary and sufficient condition provided in [9]:

    M          |  S = J            |  S = L
    M^T =  M   |  any x, b ∈ K^n   |  x^T M b = 0
    M^T = -M   |  x^T M b = 0      |  any x, b ∈ K^n
    M^H =  M   |  x^H M b ∈ R      |  x^H M b ∈ iR
    M^H = -M   |  x^H M b ∈ iR     |  x^H M b ∈ R

Table 2: Necessary and sufficient condition for $S(x,b) \neq \emptyset$.

Now, given a pair of matrices $X$ and $B$ in $\mathbb{K}^{n\times p}$ satisfying the conditions in Theorem 3.1, the following result characterizes the solutions of the structured mapping problem.

Theorem 3.3 (Characterization)
Let $(X,B) \in \mathbb{K}^{n\times p} \times \mathbb{K}^{n\times p}$ and $S \in \{\mathbb{J}, \mathbb{L}\}$. Suppose that $S(X,B) \neq \emptyset$. Then $A \in S(X,B)$ if and only if $A$ is of the form
\[
A = \begin{cases}
M^{-1} F_T(X, MB) + M^{-1}(I - XX^\dagger)^T Z (I - XX^\dagger), & \text{if } MS \in \{\mathrm{sym}, \mathrm{skew\text{-}sym}\}, \\
M^{-1} F_H(X, MB) + M^{-1}(I - XX^\dagger) Z (I - XX^\dagger), & \text{if } MS \in \{\mathrm{Herm}, \mathrm{skew\text{-}Herm}\},
\end{cases}
\]
for some $Z$ such that $Z \in MS$.

(a) Frobenius norm: Define $A_o := M^{-1} F_T(X, MB)$ when $MS \in \{\mathrm{sym}, \mathrm{skew\text{-}sym}\}$, and $A_o := M^{-1} F_H(X, MB)$ when $MS \in \{\mathrm{Herm}, \mathrm{skew\text{-}Herm}\}$. Then $A_o$ is the unique matrix in $S(X,B)$ such that $A_o X = B$ and
\[
\sigma_S(X,B) = \|A_o\|_F = \begin{cases}
\sqrt{2\|BX^\dagger\|_F^2 - \mathrm{Tr}\big(MBX^\dagger (MBX^\dagger)^H (XX^\dagger)^T\big)}, & \text{when } MS \in \{\mathrm{sym}, \mathrm{skew\text{-}sym}\}, \\
\sqrt{2\|BX^\dagger\|_F^2 - \mathrm{Tr}\big(MBX^\dagger (MBX^\dagger)^H XX^\dagger\big)}, & \text{when } MS \in \{\mathrm{Herm}, \mathrm{skew\text{-}Herm}\}.
\end{cases}
\]

(b) Spectral norm: Set $\sigma_{\mathrm{unstruc}}(X,B) := \sigma_S(X,B)$ when $S = \mathbb{K}^{n\times n}$. Then $\sigma_S(X,B) = \|BX^\dagger\| = \sigma_{\mathrm{unstruc}}(X,B)$. Suppose that $\mathrm{rank}(X) = r$. Consider the SVD $X = U\Sigma V^H$ and partition $U$ as $U = [U_1\ U_2]$, where $U_1 \in \mathbb{K}^{n\times r}$.

Case-I. If $MS \in \{\mathrm{sym}, \mathrm{skew\text{-}sym}\}$ then consider the matrix
\[
A_o := M^{-1} F_T(X, MB) - M^{-1}(I - XX^\dagger)^T K\, U_1^H MBX^\dagger U_1\, K^T (I - XX^\dagger) + M^{-1} f(Z),
\]
where
\[
f(Z) = \mu\, U_2 (I - U_2^T K K^H U_2)^{1/2} Z (I - U_2^H K K^T U_2)^{1/2} U_2^H, \qquad \mu = \|BX^\dagger\|,
\]
\[
K = \begin{cases}
MBX^\dagger U_1 (\mu^2 I - U_1^H (MBX^\dagger)^H MBX^\dagger U_1)^{-1/2}, & \text{when } MS = \mathrm{sym}, \\
MBX^\dagger U_1 (\mu^2 I + U_1^H (MBX^\dagger)^H MBX^\dagger U_1)^{-1/2}, & \text{when } MS = \mathrm{skew\text{-}sym},
\end{cases}
\]
and $Z$ is an arbitrary contraction such that $Z = Z^T$ (resp., $Z = -Z^T$) when $MS = \mathrm{sym}$ (resp., $MS = \mathrm{skew\text{-}sym}$). Then $A_o \in S(X,B)$ and $\|A_o\| = \sigma_S(X,B)$.

Case-II. If $MS = \mathrm{Herm}$ then consider the matrix
\[
A_o := M^{-1} F_H(X, MB) - M^{-1}(I - XX^\dagger) K\, U_1^H MBX^\dagger U_1\, K^H (I - XX^\dagger) + M^{-1} f(Z),
\]
where
\[
f(Z) = \mu\, U_2 (I - U_2^H K K^H U_2)^{1/2} Z (I - U_2^H K K^H U_2)^{1/2} U_2^H, \qquad \mu = \|BX^\dagger\|,
\]
\[
K = MBX^\dagger U_1 (\mu^2 I - U_1^H (MBX^\dagger)^H MBX^\dagger U_1)^{-1/2},
\]
and $Z = Z^H$ is an arbitrary contraction. Then $A_o \in S(X,B)$ and $\|A_o\| = \sigma_S(X,B)$.

If $MS = \mathrm{skew\text{-}Herm}$ then define $A_o := -i\,\mathcal{G}(X, iB)$, where $\mathcal{G}(Y,D) \in S(Y,D)$ is such that $\|\mathcal{G}(Y,D)\| = \sigma_S(Y,D)$ for the case $MS = \mathrm{Herm}$. Then $A_o \in S(X,B)$ and $\|A_o\| = \sigma_S(X,B)$.

Proof:
First, observe that $AX = B \iff MAX = MB$.
Consequently, we have $A \in S(X,B) \iff MA \in MS(X, MB)$. Further, since $M$ is unitary, $\sigma_S(X,B) = \sigma_{MS}(X, MB)$ for the spectral and the Frobenius norms. By (6), $MS \in \{\mathrm{sym}, \mathrm{skew\text{-}sym}, \mathrm{Herm}, \mathrm{skew\text{-}Herm}\}$. Thus, it boils down to proving the results for symmetric, skew-symmetric, Hermitian and skew-Hermitian matrices. Indeed, if $\varphi(X,B)$ is a symmetric or skew-symmetric or Hermitian or skew-Hermitian solution of $AX = B$, then $M^{-1}\varphi(X, MB) \in S(X,B)$.

Further, note that a skew-Hermitian solution of $AX = B$ can be obtained from a Hermitian solution of $iAX = iB$ and vice versa. Indeed, if $AX = B$ and $X^H B$ is skew-Hermitian, then $iAX = iB$ and $X^H(iB)$ is Hermitian. Hence $\varphi(X, iB)$ is a Hermitian solution of $iAX = iB$ and $-i\varphi(X, iB)$ is a skew-Hermitian solution of $AX = B$. Consequently, we only need to prove the results for symmetric, skew-symmetric and Hermitian matrices. We prove these results separately in Theorem 3.5. Hence the proof. □
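The skew-Hermitian-to-Hermitian reduction used in the proof can be checked numerically. The sketch below (illustrative; it re-implements only the Hermitian branch of (7) as `F_H`) builds a skew-Hermitian solution of $AX = B$ as $-i\,F_H(X, iB)$.

```python
import numpy as np

def F_H(X, B):
    """F_*(X, B) of (7) with * = H, restricted to the case X^H B Hermitian."""
    Xp = np.linalg.pinv(X)
    BXp = B @ Xp
    S = X.conj().T @ B
    return BXp + BXp.conj().T - Xp.conj().T @ S.conj().T @ Xp

# Take B = A X with A skew-Hermitian, so X^H B is skew-Hermitian.
rng = np.random.default_rng(3)
n, p = 5, 2
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (A - A.conj().T) / 2                  # skew-Hermitian
X = rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))
B = A @ X

# X^H (iB) is Hermitian, so F_H(X, iB) is a Hermitian solution of A'X = iB,
# and -i F_H(X, iB) is a skew-Hermitian solution of AX = B.
E = -1j * F_H(X, 1j * B)
print(np.allclose(E @ X, B))              # True
print(np.allclose(E.conj().T, -E))        # True: E is skew-Hermitian
```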
Remark 3.4 (a) We mention that the solution set $S(X,B)$ as characterized in Theorem 3.3 can be written compactly as
\[
S(X,B) = \begin{cases}
M^{-1} F_T(X, MB) + M^{-1}(I - XX^\dagger)^T\, MS\, (I - XX^\dagger), & \text{if } MS \in \{\mathrm{sym}, \mathrm{skew\text{-}sym}\}, \\
M^{-1} F_H(X, MB) + M^{-1}(I - XX^\dagger)\, MS\, (I - XX^\dagger), & \text{if } MS \in \{\mathrm{Herm}, \mathrm{skew\text{-}Herm}\}.
\end{cases}
\]
Here $x + S := \{x + s : s \in S\}$.

(b) We also mention that when $X$ has full column rank, for the Frobenius norm we have
\[
\sigma_S(X,B) = \sqrt{2\|B(X^H X)^{-1/2}\|_F^2 - \|(X^T X)^{-1/2} X^T M B (X^H X)^{-1/2}\|_F^2}
\]
when $MS \in \{\mathrm{sym}, \mathrm{skew\text{-}sym}\}$, and
\[
\sigma_S(X,B) = \sqrt{2\|B(X^H X)^{-1/2}\|_F^2 - \|(X^H X)^{-1/2} X^H M B (X^H X)^{-1/2}\|_F^2}
\]
when $MS \in \{\mathrm{Herm}, \mathrm{skew\text{-}Herm}\}$. Indeed, when $X$ has full column rank, we have $X^\dagger X = I$ and $X^\dagger = (X^H X)^{-1} X^H$, and hence the results follow. On the other hand, for the spectral norm, we have $\sigma_S(X,B) = \|B(X^H X)^{-1/2}\|$.

To complete the proof of Theorem 3.3, for the rest of this section we consider $S$ to be such that $S \in \{\mathrm{sym}, \mathrm{skew\text{-}sym}, \mathrm{Herm}, \mathrm{skew\text{-}Herm}\}$. Observe that if $A \in \mathbb{K}^{n\times n}$ is given by
\[
A := \begin{bmatrix} A_1 & \pm A_2^* \\ A_2 & A_3 \end{bmatrix}
\quad\text{then}\quad
\|A\|_F = \left( 2\left\| \begin{bmatrix} A_1 \\ A_2 \end{bmatrix} \right\|_F^2 - \|A_1\|_F^2 + \|A_3\|_F^2 \right)^{1/2}. \tag{8}
\]
We repeatedly use this fact in the sequel.

Theorem 3.5 (Special solutions) Let $(X,B) \in \mathbb{K}^{n\times p} \times \mathbb{K}^{n\times p}$ and $S \in \{\mathrm{sym}, \mathrm{skew\text{-}sym}, \mathrm{Herm}\}$. Suppose that $S(X,B) \neq \emptyset$. Then $A \in S(X,B)$ if and only if $A$ is of the form
\[
A = \begin{cases}
F_T(X,B) + (I - XX^\dagger)^T Z (I - XX^\dagger), & \text{if } S \in \{\mathrm{sym}, \mathrm{skew\text{-}sym}\}, \\
F_H(X,B) + (I - XX^\dagger) Z (I - XX^\dagger), & \text{if } S = \mathrm{Herm},
\end{cases}
\]
for some $Z \in S$.

(a) Frobenius norm: Consider $A_o := F_T(X,B)$ when $S \in \{\mathrm{sym}, \mathrm{skew\text{-}sym}\}$, and $A_o := F_H(X,B)$ when $S = \mathrm{Herm}$. Then $A_o$ is the unique matrix in $S(X,B)$ such that
\[
\sigma_S(X,B) = \|A_o\|_F = \begin{cases}
\sqrt{2\|BX^\dagger\|_F^2 - \mathrm{Tr}\big(BX^\dagger (BX^\dagger)^H (XX^\dagger)^T\big)}, & \text{when } S \in \{\mathrm{sym}, \mathrm{skew\text{-}sym}\}, \\
\sqrt{2\|BX^\dagger\|_F^2 - \mathrm{Tr}\big(BX^\dagger (BX^\dagger)^H XX^\dagger\big)}, & \text{when } S = \mathrm{Herm}.
\end{cases}
\]
(b) Spectral norm: We have $\sigma_S(X,B) = \|BX^\dagger\| = \sigma_{\mathrm{unstruc}}(X,B)$. Suppose that $\mathrm{rank}(X) = r$. Consider the SVD $X = U\Sigma V^H$ and partition $U$ as $U = [U_1\ U_2]$, where $U_1 \in \mathbb{K}^{n\times r}$.

Case-I. If $S \in \{\mathrm{sym}, \mathrm{skew\text{-}sym}\}$ then consider the matrix
\[
A_o := F_T(X,B) - (I - XX^\dagger)^T K\, U_1^H BX^\dagger U_1\, K^T (I - XX^\dagger) + f(Z),
\]
where
\[
f(Z) = \mu\, U_2 (I - U_2^T K K^H U_2)^{1/2} Z (I - U_2^H K K^T U_2)^{1/2} U_2^H, \qquad \mu = \|BX^\dagger\|,
\]
\[
K = \begin{cases}
BX^\dagger U_1 (\mu^2 I - U_1^H (BX^\dagger)^H BX^\dagger U_1)^{-1/2}, & \text{when } S = \mathrm{sym}, \\
BX^\dagger U_1 (\mu^2 I + U_1^H (BX^\dagger)^H BX^\dagger U_1)^{-1/2}, & \text{when } S = \mathrm{skew\text{-}sym},
\end{cases}
\]
and $Z$ is an arbitrary contraction such that $Z = Z^T$ (resp., $Z = -Z^T$) when $S = \mathrm{sym}$ (resp., $S = \mathrm{skew\text{-}sym}$). Then $A_o \in S(X,B)$ and $\|A_o\| = \sigma_S(X,B)$.

Case-II. If $S = \mathrm{Herm}$ then consider the matrix
\[
A_o := F_H(X,B) - (I - XX^\dagger) K\, U_1^H BX^\dagger U_1\, K^H (I - XX^\dagger) + f(Z),
\]
where
\[
f(Z) = \mu\, U_2 (I - U_2^H K K^H U_2)^{1/2} Z (I - U_2^H K K^H U_2)^{1/2} U_2^H, \qquad \mu = \|BX^\dagger\|,
\]
\[
K = BX^\dagger U_1 (\mu^2 I - U_1^H (BX^\dagger)^H BX^\dagger U_1)^{-1/2},
\]
and $Z = Z^H$ is an arbitrary contraction. Then $A_o \in S(X,B)$ and $\|A_o\| = \sigma_S(X,B)$.

Proof:
First, suppose that $S = \mathrm{sym}$. By assumption there exists $A \in S$ such that $AX = B$. Note that $\mathrm{Range}(X) = \mathrm{Range}(U_1)$. Thus, representing $A$ relative to the decomposition $A : \mathrm{Range}(X) \oplus \mathrm{Range}(X)^\perp \to \mathrm{Range}(X) \oplus \mathrm{Range}(X)^\perp$, we have $A = UU^H A UU^H$. Set
\[
\widehat{A} = U^T A U = \begin{bmatrix} A_1 & A_2^T \\ A_2 & A_3 \end{bmatrix}.
\]
Then $\widehat{A} \in S$ and $\|A\| = \|\widehat{A}\|$ for the spectral and the Frobenius norms. Set $\Sigma_1 := \Sigma(1{:}r, 1{:}r)$. Let $V = [V_1\ V_2]$ be a conformal partition of $V$ such that $X = U_1 \Sigma_1 V_1^H$. Now $AX = B \Rightarrow \bar{U}\widehat{A}U^H X = B$. This gives
\[
\begin{bmatrix} A_1 & A_2^T \\ A_2 & A_3 \end{bmatrix} \begin{bmatrix} U_1^H \\ U_2^H \end{bmatrix} X = U^T B = \begin{bmatrix} U_1^T \\ U_2^T \end{bmatrix} B
\;\Rightarrow\;
\begin{bmatrix} A_1 & A_2^T \\ A_2 & A_3 \end{bmatrix} \begin{bmatrix} \Sigma_1 V_1^H \\ 0 \end{bmatrix} = \begin{bmatrix} U_1^T B \\ U_2^T B \end{bmatrix},
\]
so that $A_1 \Sigma_1 V_1^H = U_1^T B$ and $A_2 \Sigma_1 V_1^H = U_2^T B$. Therefore, we have $A_1 = U_1^T B V_1 \Sigma_1^{-1} = U_1^T BX^\dagger U_1$ and $A_2 = U_2^T B V_1 \Sigma_1^{-1} = U_2^T BX^\dagger U_1$. Notice that $A_1$ is symmetric if and only if $X^T B = B^T X$ and $BX^\dagger X = B$. Indeed, $X^T B = B^T X$ gives $\bar{V}_1 \Sigma_1 U_1^T B = B^T U_1 \Sigma_1 V_1^H$ and thus $B^T U_1 = \bar{V}_1 \Sigma_1 U_1^T B V_1 \Sigma_1^{-1}$. Now
\[
(U_1^T BX^\dagger U_1)^T = U_1^T (X^\dagger)^T B^T U_1 = \Sigma_1^{-1} V_1^T B^T U_1 = \Sigma_1^{-1} V_1^T \bar{V}_1 \Sigma_1 U_1^T B V_1 \Sigma_1^{-1} = U_1^T B V_1 \Sigma_1^{-1} = U_1^T BX^\dagger U_1,
\]
as desired. Thus we have
\[
\widehat{A} = \begin{bmatrix} U_1^T BX^\dagger U_1 & (U_2^T B V_1 \Sigma_1^{-1})^T \\ U_2^T B V_1 \Sigma_1^{-1} & A_3 \end{bmatrix}. \tag{9}
\]
Then by (8) we have $\|\widehat{A}\|_F^2 = 2\|BX^\dagger\|_F^2 - \mathrm{Tr}(BX^\dagger (BX^\dagger)^H (XX^\dagger)^T) + \|A_3\|_F^2$. Hence, for the Frobenius norm, setting $A_3 = 0$ we obtain the unique matrix
\[
A = \bar{U} \begin{bmatrix} U_1^T BX^\dagger U_1 & (U_2^T B V_1 \Sigma_1^{-1})^T \\ U_2^T B V_1 \Sigma_1^{-1} & 0 \end{bmatrix} U^H = BX^\dagger + (BX^\dagger)^T - (XX^\dagger)^T BX^\dagger = F_T(X,B)
\]
such that $A \in S(X,B)$ and $\|A\|_F = \sqrt{2\|BX^\dagger\|_F^2 - \mathrm{Tr}(BX^\dagger (BX^\dagger)^H (XX^\dagger)^T)} = \sigma_S(X,B)$.
Now from (9) we have
\[
A = \bar{U} \begin{bmatrix} U_1^T BX^\dagger U_1 & (U_2^T B V_1 \Sigma_1^{-1})^T \\ U_2^T B V_1 \Sigma_1^{-1} & A_3 \end{bmatrix} U^H
= (U_1 U_1^H)^T BX^\dagger + (V_1 \Sigma_1^{-1} U_1^H)^T B^T (I - U_1 U_1^H) + (I - U_1 U_1^H)^T BX^\dagger + \bar{U}_2 A_3 U_2^H
\]
\[
= (XX^\dagger)^T BX^\dagger + (X^\dagger)^T B^T (I - XX^\dagger) + (I - XX^\dagger)^T BX^\dagger + \bar{U}_2 A_3 U_2^H
\]
\[
= BX^\dagger + (BX^\dagger)^T - (X^\dagger)^T X^T B X^\dagger + (I - XX^\dagger)^T Z (I - XX^\dagger)
= F_T(X,B) + (I - XX^\dagger)^T Z (I - XX^\dagger),
\]
where $Z \in S$ is arbitrary (writing $\bar{U}_2 A_3 U_2^H = (I - XX^\dagger)^T Z (I - XX^\dagger)$).

For the spectral norm, again consider the matrix $\widehat{A}$ given in (9) and set
\[
\mu := \left\| \begin{bmatrix} U_1^T B V_1 \Sigma_1^{-1} \\ U_2^T B V_1 \Sigma_1^{-1} \end{bmatrix} \right\| = \|U^T B V_1 \Sigma_1^{-1}\| = \|BX^\dagger\|.
\]
Then it follows that $\|\widehat{A}\| \geq \mu$. Now by Theorem 2.1 we have $\|\widehat{A}\| = \mu$ when
\[
A_3 = -K_1 A_1 K_1^T + \mu (I - K_1 K_1^H)^{1/2} Z (I - K_1 K_1^T)^{1/2}, \tag{10}
\]
where $K_1 = U_2^T BX^\dagger U_1 (\mu^2 I - U_1^H (BX^\dagger)^H BX^\dagger U_1)^{-1/2}$ and $Z$ is an arbitrary contraction such that $Z = Z^T$ (resp., $Z = -Z^T$) when $S = \mathrm{sym}$ (resp., $S = \mathrm{skew\text{-}sym}$). Thus we have
\[
A = BX^\dagger + (BX^\dagger)^T - (XX^\dagger)^T BX^\dagger + \bar{U}_2 A_3 U_2^H
\]
such that $A \in S$, $AX = B$ and $\|A\| = \|BX^\dagger\| = \sigma_S(X,B)$, where $A_3$ is given in (10). Upon simplification we obtain the desired form of $A_o$.

Next, suppose that $S = \mathrm{skew\text{-}sym}$. Again representing $A$ relative to the decomposition $A : \mathrm{Range}(X) \oplus \mathrm{Range}(X)^\perp \to \mathrm{Range}(X) \oplus \mathrm{Range}(X)^\perp$,
we have $A = UU^H A UU^H$. Set
\[
\widehat{A} = U^T A U = \begin{bmatrix} A_1 & -A_2^T \\ A_2 & A_3 \end{bmatrix}.
\]
Then $\widehat{A} \in S$ and $\|A\| = \|\widehat{A}\|$ for the spectral and the Frobenius norms. By assumption, $AX = B \Rightarrow \bar{U}\widehat{A}U^H X = B$. This gives
\[
\begin{bmatrix} A_1 & -A_2^T \\ A_2 & A_3 \end{bmatrix} \begin{bmatrix} U_1^H \\ U_2^H \end{bmatrix} X = U^T B = \begin{bmatrix} U_1^T \\ U_2^T \end{bmatrix} B
\;\Rightarrow\;
\begin{bmatrix} A_1 & -A_2^T \\ A_2 & A_3 \end{bmatrix} \begin{bmatrix} \Sigma_1 V_1^H \\ 0 \end{bmatrix} = \begin{bmatrix} U_1^T B \\ U_2^T B \end{bmatrix}.
\]
Consequently, we have $A_1 \Sigma_1 V_1^H = U_1^T B$ and $A_2 \Sigma_1 V_1^H = U_2^T B$ and hence
\[
\widehat{A} = \begin{bmatrix} U_1^T BX^\dagger U_1 & -(U_2^T B V_1 \Sigma_1^{-1})^T \\ U_2^T B V_1 \Sigma_1^{-1} & A_3 \end{bmatrix}. \tag{11}
\]
As before, setting $A_3 = 0$ in (11) we obtain the desired results for the Frobenius norm. As for the spectral norm, applying Theorem 2.1 to the matrix $\widehat{A}$ in (11) and following steps similar to those in the case $S = \mathrm{sym}$, we obtain the desired results.

Finally, suppose that $S = \mathrm{Herm}$. Then, representing $A$ relative to the decomposition $A : \mathrm{Range}(X) \oplus \mathrm{Range}(X)^\perp \to \mathrm{Range}(X) \oplus \mathrm{Range}(X)^\perp$, we have $A = UU^H A UU^H$. Set
\[
\widehat{A} = U^H A U = \begin{bmatrix} A_1 & A_2^H \\ A_2 & A_3 \end{bmatrix}.
\]
Then $\widehat{A} \in S$ and $\|A\| = \|\widehat{A}\|$ for the spectral and the Frobenius norms. Now $AX = B \Rightarrow U\widehat{A}U^H X = B$. This gives
\[
\begin{bmatrix} A_1 & A_2^H \\ A_2 & A_3 \end{bmatrix} \begin{bmatrix} U_1^H \\ U_2^H \end{bmatrix} X = U^H B = \begin{bmatrix} U_1^H \\ U_2^H \end{bmatrix} B
\;\Rightarrow\;
\begin{bmatrix} A_1 & A_2^H \\ A_2 & A_3 \end{bmatrix} \begin{bmatrix} \Sigma_1 V_1^H \\ 0 \end{bmatrix} = \begin{bmatrix} U_1^H B \\ U_2^H B \end{bmatrix}.
\]
Consequently, we have $A_1 \Sigma_1 V_1^H = U_1^H B$ and $A_2 \Sigma_1 V_1^H = U_2^H B$ and hence
\[
\widehat{A} = \begin{bmatrix} U_1^H BX^\dagger U_1 & (U_2^H B V_1 \Sigma_1^{-1})^H \\ U_2^H B V_1 \Sigma_1^{-1} & A_3 \end{bmatrix}. \tag{12}
\]
Now by (8) we have $\|\widehat{A}\|_F^2 = 2\|BX^\dagger\|_F^2 - \mathrm{Tr}(BX^\dagger (BX^\dagger)^H XX^\dagger) + \|A_3\|_F^2$. Hence, setting $A_3 = 0$ in (12), we obtain the unique matrix
\[
A = U \begin{bmatrix} U_1^H BX^\dagger U_1 & (U_2^H B V_1 \Sigma_1^{-1})^H \\ U_2^H B V_1 \Sigma_1^{-1} & 0 \end{bmatrix} U^H = BX^\dagger + (BX^\dagger)^H - XX^\dagger BX^\dagger = F_H(X,B)
\]
such that $A \in S(X,B)$ and $\|A\|_F = \sqrt{2\|BX^\dagger\|_F^2 - \mathrm{Tr}(BX^\dagger (BX^\dagger)^H XX^\dagger)} = \sigma_S(X,B)$.
Now from (12) we have
\[
A = U \begin{bmatrix} U_1^H BX^\dagger U_1 & (U_2^H B V_1 \Sigma_1^{-1})^H \\ U_2^H B V_1 \Sigma_1^{-1} & A_3 \end{bmatrix} U^H
= (U_1 U_1^H) BX^\dagger + (V_1 \Sigma_1^{-1} U_1^H)^H B^H (I - U_1 U_1^H) + (I - U_1 U_1^H) BX^\dagger + U_2 A_3 U_2^H
\]
\[
= XX^\dagger BX^\dagger + (X^\dagger)^H B^H (I - XX^\dagger) + (I - XX^\dagger) BX^\dagger + U_2 A_3 U_2^H
\]
\[
= BX^\dagger + (BX^\dagger)^H - (X^\dagger)^H X^H B X^\dagger + (I - XX^\dagger) Z (I - XX^\dagger)
= F_H(X,B) + (I - XX^\dagger) Z (I - XX^\dagger),
\]
where $Z \in S$ is arbitrary. For the spectral norm, again consider the matrix $\widehat{A}$ given in (12) and set
\[
\mu := \left\| \begin{bmatrix} U_1^H B V_1 \Sigma_1^{-1} \\ U_2^H B V_1 \Sigma_1^{-1} \end{bmatrix} \right\| = \|U^H B V_1 \Sigma_1^{-1}\| = \|BX^\dagger\|.
\]
Then $\|\widehat{A}\| \geq \mu$. Now by Theorem 2.1 we have $\|\widehat{A}\| = \mu$ when
\[
A_3 = -K_1 A_1 K_1^H + \mu (I - K_1 K_1^H)^{1/2} Z (I - K_1 K_1^H)^{1/2}, \tag{13}
\]
where $K_1 = U_2^H BX^\dagger U_1 (\mu^2 I - U_1^H (BX^\dagger)^H BX^\dagger U_1)^{-1/2}$ and $Z = Z^H$ is an arbitrary contraction. Thus we have $A = BX^\dagger + (BX^\dagger)^H - XX^\dagger BX^\dagger + U_2 A_3 U_2^H = F_H(X,B) + U_2 A_3 U_2^H$ such that $A \in S(X,B)$ and $\|A\| = \|BX^\dagger\| = \sigma_S(X,B)$, where $A_3$ is given in (13). Finally, upon simplification, we obtain the desired form of $A_o$. This completes the proof. □
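For $S = \mathrm{Herm}$ (so $M = I$), Theorem 3.5 can be probed numerically: every matrix $F_H(X,B) + (I - XX^\dagger)Z(I - XX^\dagger)$ with $Z$ Hermitian again solves $AX = B$, and the $Z = 0$ member has the smallest Frobenius norm. An illustrative NumPy sketch (names ours):

```python
import numpy as np

def F_H(X, B):
    """F_*(X, B) of (7) with * = H, for X^H B Hermitian."""
    Xp = np.linalg.pinv(X)
    BXp = B @ Xp
    return BXp + BXp.conj().T - Xp.conj().T @ (X.conj().T @ B).conj().T @ Xp

rng = np.random.default_rng(4)
n, p = 6, 2
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (A + A.conj().T) / 2
X = rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))
B = A @ X

A0 = F_H(X, B)                            # Frobenius-optimal Hermitian solution
Q = np.eye(n) - X @ np.linalg.pinv(X)     # projector I - XX†
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Z = (Z + Z.conj().T) / 2                  # arbitrary Hermitian Z
A1 = A0 + Q @ Z @ Q                       # another member of S(X, B)

print(np.allclose(A1 @ X, B))             # True: still maps X to B
print(np.allclose(A1, A1.conj().T))       # True: still Hermitian
print(np.linalg.norm(A0, 'fro') <= np.linalg.norm(A1, 'fro') + 1e-12)  # True
```

The last check works because $QZQ$ is Frobenius-orthogonal to $A_0$, so adding it can only increase the norm.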
Applications.

We now consider a few applications of the structured mapping problem. As before, we consider $S \in \{\mathbb{J}, \mathbb{L}\}$. Given a full column rank matrix $X \in \mathbb{K}^{n\times p}$, a matrix $D \in \mathbb{K}^{p\times p}$ and a structured matrix $A \in S$, we say that $(X,D)$ is an invariant pair for $A$ if $AX = XD$.
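Numerically, an invariant pair is recognized by a small residual $\|AX - XD\|$, and then $\Lambda(D) \subset \Lambda(A)$ when $X$ has full column rank. A short illustrative sketch (NumPy, names ours):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 6, 2
A = rng.standard_normal((n, n))

# Build an invariant pair (X, D) from two eigenvectors of A: the columns of X
# span an invariant subspace, and D = X† A X (= (X^H X)^{-1} X^H A X here).
w, V = np.linalg.eig(A)
X = V[:, :p]                              # eigenvector basis (full column rank)
D = np.linalg.pinv(X) @ A @ X             # the matrix with A X = X D

print(np.allclose(A @ X, X @ D))          # True: (X, D) is an invariant pair
# Λ(D) ⊂ Λ(A): every eigenvalue of D is (numerically) an eigenvalue of A.
wD = np.linalg.eig(D)[0]
print(all(np.min(np.abs(w - mu)) < 1e-8 for mu in wD))  # True
```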
Then a partially prescribed inverse eigenvalue problem can be stated as follows.
Problem-I.
Given a full column rank matrix $X \in \mathbb{K}^{n\times p}$ and a matrix $D \in \mathbb{K}^{p\times p}$, find a matrix $A_o \in S$, if it exists, such that $A_o X = XD$ and $\|A_o\| = \tau_S(X,D)$, where
\[
\tau_S(X,D) := \inf\{ \|A\| : A \in S \text{ and } AX = XD \}.
\]
When $A_o \in S$ exists, $(X,D)$ provides a partial spectral decomposition of $A_o$, in the sense that $\Lambda(D) \subset \Lambda(A_o)$ and $\mathrm{span}(X)$ is an invariant subspace of $A_o$ corresponding to $\Lambda(D)$. Obviously,
Problem-I is a structured mapping problem for X and B := XD.
Consequently, it has a solution if and only if $X$ and $B := XD$ satisfy the conditions in Theorem 3.1, that is, if and only if $(X^* M XD)^* = X^* M XD$ or $(X^* M XD)^* = -X^* M XD$ for $* \in \{T, H\}$. An optimal solution $A_o$ is then given by Theorem 3.3, with $\tau_S(X,D) = \|A_o\| = \sigma_S(X, XD)$ for the spectral and the Frobenius norms. Note that Problem-I has a solution when $X^* M X = 0$; in such a case the subspace spanned by the columns of $X$ is called $M$-neutral, and, in short, $X$ is said to be $M$-neutral. Thus, if $X$ is $M$-neutral then for any $D \in \mathbb{K}^{p\times p}$, $(X,D)$ is an invariant pair for some $A \in S$.

A related problem, which arises when analyzing backward errors of approximate invariant pairs, is as follows.
Problem-II.
Given a full column rank matrix $X \in \mathbb{K}^{n\times p}$, a matrix $D \in \mathbb{K}^{p\times p}$ and a structured matrix $A \in S$, find a structured matrix $E_o \in S$, if it exists, such that $E_o X = XD$ and $\|E_o - A\| = \eta_S(A, X, D)$, where
\[
\eta_S(A, X, D) := \inf\{ \|E - A\| : E \in S \text{ and } EX = XD \}.
\]
Problem-II occurs, for example, when a Krylov subspace method such as the Arnoldi method is used to compute a few eigenvalues of a (large) matrix $A$. Indeed, starting with a unit vector $v_1 \in \mathbb{C}^n$, after $p$ steps of the Arnoldi method we have
\[
AV_p = V_p H_p + \alpha v_{p+1} e_p^T,
\]
where $V_p := [v_1, \ldots, v_p]$, $H_p$ is upper Hessenberg and $\{v_1, \ldots, v_{p+1}\}$ is an orthonormal basis of the Krylov subspace $\mathcal{K}_{p+1}(v_1, A) := \mathrm{span}(v_1, Av_1, \ldots, A^p v_1)$. Thus, when $|\alpha|$ is small, $(V_p, H_p)$ is an approximate invariant pair of $A$ and hence the backward error of $(V_p, H_p)$ may be gainfully used in analyzing errors in the computed eigenvalues of $A$.

Writing $E = A + \Delta A$, it follows that Problem-II is a structured mapping problem for $X$ and $B := XD - AX$.
Indeed, if $\Delta A \in S$ is such that $(A + \Delta A)X = XD$ then $\Delta A\,X = XD - AX = B$. Consequently, $\Delta A \in S$ exists if and only if $X$ and $B = XD - AX$ satisfy the conditions in Theorem 3.1. An optimal solution $\Delta A_o$ is then given by Theorem 3.3, with
\[
\eta_S(A, X, D) = \|\Delta A_o\| = \sigma_S(X, XD - AX)
\]
for the spectral and the Frobenius norms. The quantity $\eta_S(A, X, D)$ is the structured backward error of $(X, D)$ as an approximate invariant pair of $A$. Note that the conditions in Theorem 3.1 are satisfied if and only if $X^H M XD - X^H M AX$ (resp., $X^T M XD - X^T M AX$) is Hermitian or skew-Hermitian (resp., symmetric or skew-symmetric). In particular, this condition is satisfied when $M^H = -M$, $S = \mathbb{L}$ and $X \in \mathbb{K}^{n\times p}$ is $M$-neutral. This fact plays an important role in spectral perturbation analysis of Hamiltonian matrices; see [2].

We now consider the special case when $X = x \in \mathbb{C}^n$ and $D = \lambda \in \mathbb{C}$, and derive the structured backward error $\eta_S(A, x, \lambda)$ of an approximate eigenpair $(\lambda, x)$ of $A$. Set $r := \lambda x - Ax$. Then there is a matrix $E \in S$ such that $(A + E)x = \lambda x$ if and only if $x$ and $r$ satisfy the condition in Table 2. Let $\eta_S(A, x, \lambda)$ (resp., $\eta_S^F(A, x, \lambda)$) denote $\eta_S(A, x, \lambda)$ for the spectral norm (resp., the Frobenius norm). Then, by Theorem 3.3, we have the following.
Corollary 4.1
Let $S \in \{\mathbb{J}, \mathbb{L}\}$ and $A \in S$. Suppose that $\lambda \in \mathbb{C}$ and $x \in \mathbb{C}^n \setminus \{0\}$ satisfy the condition in Table 2. Then we have $\eta_S(A, x, \lambda) = \|r\| / \|x\|$ and, when $\|x\| = 1$,
\[
\eta_S^F(A, x, \lambda) = \sqrt{2\|r\|^2 - |\langle r, x\rangle_M|^2}.
\]
Define $E$ by
\[
E := \begin{cases}
(x^T M r)\, M^{-1} \bar{x} x^H + M^{-1} \bar{x}\, r^T M^T (I - x x^T) + M^{-1}(I - \bar{x} x^T) M r\, x^H, & \text{if } MS = \mathrm{sym}, \\
M^{-1}(I - \bar{x} x^T) M r\, x^H - M^{-1} \bar{x}\, r^T M^T (I - x x^T), & \text{if } MS = \mathrm{skew\text{-}sym}, \\
(x^H M r)\, M^{-1} x x^H + M^{-1} x\, r^H M^H (I - x x^H) + M^{-1}(I - x x^H) M r\, x^H, & \text{if } MS = \mathrm{Herm}.
\end{cases}
\]
Then $E \in S$ is the unique matrix such that $(A + E)x = \lambda x$ and $\|E\|_F = \eta_S^F(A, x, \lambda)$. Further, when $\|r\|^2 \neq |\langle r, x\rangle_M|^2$, define
\[
\Delta A := \begin{cases}
E - \dfrac{(x^T M r)\, M^{-1}(I - \bar{x} x^T) M r\, r^T M^T (I - x x^T)}{\|r\|^2 - |\langle r, x\rangle_M|^2}, & \text{if } MS = \mathrm{sym}, \\[1.5ex]
E, & \text{if } MS = \mathrm{skew\text{-}sym}, \\[1.5ex]
E - \dfrac{(x^H M r)\, M^{-1}(I - x x^H) M r\, r^H M^H (I - x x^H)}{\|r\|^2 - |\langle r, x\rangle_M|^2}, & \text{if } MS = \mathrm{Herm},
\end{cases}
\]
else set $\Delta A := E$. Then $\Delta A \in S$ is such that $(A + \Delta A)x = \lambda x$ and $\|\Delta A\| = \eta_S(A, x, \lambda)$.

Finally, yet another related problem, which arises when dealing with backward errors of approximate invariant subspaces as well as in inverse eigenvalue problems with a specified invariant subspace, is as follows.
Problem-III.
Let $\mathcal{X}$ be a $p$-dimensional subspace of $\mathbb{K}^n$ and $A \in S$. Then find a structured matrix $E \in S$, if it exists, such that $E\mathcal{X} \subset \mathcal{X}$ and $\|A - E\| = \omega_S(A, \mathcal{X})$, where
\[
\omega_S(A, \mathcal{X}) := \inf\{ \|E - A\| : E \in S \text{ and } E\mathcal{X} \subset \mathcal{X} \}.
\]
Let $U \in \mathbb{K}^{n\times p}$ be an isometry such that $\mathrm{span}(U) = \mathcal{X}$. If $E \in S$ is such that $E\mathcal{X} \subset \mathcal{X}$ then $EU = UD$ for some $p$-by-$p$ matrix $D$. Then, setting $\Delta A := E - A$, we have $\Delta A\, U = UD - AU$ and hence, by Theorem 3.3, $\omega_S(A, \mathcal{X}) = \min_D \|AU - UD\|$ for the spectral norm. The choice of $D$ that minimizes $\|AU - UD\|$ is given by the following result.

Proposition 4.2
Let $U \in \mathbb{K}^{n\times p}$ be an isometry and $A \in \mathbb{K}^{n\times n}$. Set $P := UU^H$. Then for the spectral and the Frobenius norms, we have
\[
\min_D \|AU - UD\| = \|AU - U(U^H A U)\| = \|(I - P)AP\|,
\]
where the minimum is taken over $D \in \mathbb{K}^{p\times p}$. Further, if $A$ is Hermitian (resp., skew-Hermitian) then so is the minimizer $D$. On the other hand, if $A$ is symmetric (resp., skew-symmetric) then so is the minimizer $D$, provided $U$ is real.

Proof: Let $Z := [U, V]$ be unitary. Then for the spectral and the Frobenius norms, we have
\[
\|AU - UD\| = \|Z^H(AU - UD)\| = \left\| \begin{bmatrix} U^H A U - D \\ V^H A U \end{bmatrix} \right\| \geq \|V^H A U\|,
\]
and equality holds for $D = U^H A U$. Now $\|V^H A U\| = \|V V^H A U U^H\| = \|(I - P)AP\|$ gives the desired minimum. That $D$ inherits the structure of $A$ when $A$ is Hermitian or skew-Hermitian, and under the additional assumption that $U$ is real when $A$ is symmetric or skew-symmetric, is immediate. □

Thus, for a solution of
Problem-III, we choose an isometry $U$ whose columns form an orthonormal basis of $\mathcal{X}$ and set $D := U^H A U$. Then Problem-III reduces to Problem-II for the pair $(U, D)$. However, since $U$ is an isometry and $D = U^H A U$, the existence criterion as well as the solutions of Problem-III have simpler forms. Indeed, when $MS = \mathrm{Herm}$ (resp., $MS = \mathrm{skew\text{-}Herm}$), the matrix $U^H M (I - P)AU$ is Hermitian (resp., skew-Hermitian) and hence, by Theorem 3.1, Problem-III has a solution. On the other hand, when $MS = \mathrm{sym}$ (resp., $MS = \mathrm{skew\text{-}sym}$), the matrix $U^T M (I - P)AU$ is symmetric (resp., skew-symmetric) provided $U$ is real; hence, by Theorem 3.1, Problem-III has a solution. We mention once again that Problem-III has a solution in the case $MS \in \{\mathrm{sym}, \mathrm{skew\text{-}sym}\}$ provided that the orthogonal projection $P$ is real. Thus, in either case, if $E \in S$ and $E\mathcal{X} \subset \mathcal{X}$ then, by Theorem 3.3, we have
\[
E = \begin{cases}
A - M^{-1} F_H(U, M(I - P)AU) + M^{-1}(I - P) Z (I - P), & \text{if } MS \in \{\mathrm{Herm}, \mathrm{skew\text{-}Herm}\}, \\
A - M^{-1} F_T(U, M(I - P)AU) + M^{-1}(I - P) Z (I - P), & \text{if } MS \in \{\mathrm{sym}, \mathrm{skew\text{-}sym}\},
\end{cases}
\]
where $Z \in MS$ is arbitrary. Further, we have the following result from Theorem 3.3.

Theorem 4.3
Let $\mathcal{X}$ be a $p$-dimensional subspace of $\mathbb{K}^n$ and let $A \in S$. Let $P$ be the orthogonal projection onto $\mathcal{X}$, given by $P := UU^H$. Define $E_o := A + M^{-1}F_H(U,\, M(I-P)AU)$ when $MS \in \{\mathrm{Herm}, \mathrm{skew\text{-}Herm}\}$, and $E_o := A + M^{-1}F_T(U,\, M(I-P)AU)$ when $MS \in \{\mathrm{sym}, \mathrm{skew\text{-}sym}\}$ and $U$ is real. Then $E_o\mathcal{X} \subset \mathcal{X}$. Further, $\omega_S(A, \mathcal{X}) = \|A - E_o\| = \|(I-P)AP\|$ for the spectral norm, and
$$\omega_S(A, \mathcal{X}) = \|A - E_o\|_F = \sqrt{2\,\|(I-P)AP\|_F^2 - \|PM(I-P)AP\|_F^2}$$
for the Frobenius norm. In particular, when $S \in \{\mathrm{Herm}, \mathrm{skew\text{-}Herm}, \mathrm{sym}, \mathrm{skew\text{-}sym}\}$, we have $E_o = PAP + (I-P)A(I-P)$ and $\omega_S(A, \mathcal{X}) = \|A - E_o\| = \|(I-P)AP\|$ for the spectral norm, and $\omega_S(A, \mathcal{X}) = \|A - E_o\|_F = \sqrt{2}\,\|(I-P)AP\|_F$ for the Frobenius norm.

We mention that $E_o$ is the unique solution of Problem-III for the Frobenius norm. By contrast,
Problem-III has infinitely many solutions for the spectral norm, which are of the form $E_o + G$ for appropriate $G \in S$ as given in Theorem 3.3. We also mention that in the special case when, for example, $S = \mathrm{Herm}$, the results in Theorem 4.3 can be obtained easily without resorting to Theorem 3.3. Indeed, if $E \in \mathrm{Herm}$ and $E\mathcal{X} \subset \mathcal{X}$, then $(I-P)EP = 0 = PE(I-P)$, which gives $E = PEP + (I-P)E(I-P)$, or, in (operator) matrix notation,
$$E = \begin{bmatrix} PEP & 0 \\ 0 & (I-P)E(I-P) \end{bmatrix}.$$
Since $A = PAP + PA(I-P) + (I-P)AP + (I-P)A(I-P)$, or, in matrix notation,
$$A = \begin{bmatrix} PAP & PA(I-P) \\ (I-P)AP & (I-P)A(I-P) \end{bmatrix},$$
it follows that $\|E - A\|$ is minimized, for the spectral norm as well as for the Frobenius norm, when $E = PAP + (I-P)A(I-P)$, and the minimum is given by $\|(I-P)AP\|$ for the spectral norm and $\sqrt{2}\,\|(I-P)AP\|_F$ for the Frobenius norm.

4.1 Structured pseudospectra

Let $A \in S$. Recall that $\eta_S(A, x, \lambda)$ is the structured backward error of $(\lambda, x) \in \mathbb{C} \times \mathbb{C}^n$ as an approximate eigenpair of $A$, with the convention that $\eta_S(A, x, \lambda) = \infty$ when $x$ and $r := \lambda x - Ax$ do not satisfy the condition for structured mapping. For the spectral norm, set
$$\eta_S(\lambda, A) := \inf\{\eta_S(A, x, \lambda) : x \in \mathbb{C}^n \text{ and } \|x\| = 1\}.$$
Then $\eta_S(\lambda, A)$ is the structured backward error of $\lambda$ as an approximate eigenvalue of $A$. The backward error $\eta_S(\lambda, A)$ can be used to define the structured pseudospectrum $\Lambda^S_\epsilon(A)$ of $A$:
$$\Lambda^S_\epsilon(A) := \bigcup_{\|\triangle A\| \le \epsilon} \{\Lambda(A + \triangle A) : \triangle A \in S\} = \{\lambda \in \mathbb{C} : \eta_S(\lambda, A) \le \epsilon\}.$$
See [15] for more on pseudospectra and [5, 8, 12] for structured pseudospectra. We denote $\eta_S(\lambda, A)$ by $\eta(\lambda, A)$ when $S = \mathbb{C}^{n \times n}$. Then obviously $\eta(\lambda, A)$ is the unstructured backward error of $\lambda$ as an approximate eigenvalue of $A$, and we have $\eta(\lambda, A) = \sigma_{\min}(A - \lambda I)$, where $\sigma_{\min}(A)$ denotes the smallest singular value of $A$. In contrast, for many important structures the determination of $\eta_S(\lambda, A)$ is a hard optimization problem and hence the computation of $\Lambda^S_\epsilon(A)$ is a challenging task [8]. Nevertheless, for certain structures it turns out that $\eta_S(\lambda, A) = \eta(\lambda, A)$ holds for all $\lambda \in \mathbb{C}$, and for some other structures the equality holds only for certain $\lambda \in \mathbb{C}$. We denote $\Lambda^S_\epsilon(A)$ by $\Lambda_\epsilon(A)$, the unstructured pseudospectrum of $A$, when $S = \mathbb{C}^{n \times n}$.

Theorem 4.4
Let $S := J$ when $M^T = M$, and $S \in \{J, L\}$ when $M^T = -M$. Let $A \in S$. Then for every $\lambda \in \mathbb{C}$ there exists $\triangle A \in S$ such that $\lambda \in \Lambda(A + \triangle A)$ and $\eta_S(\lambda, A) = \|\triangle A\| = \eta(\lambda, A)$. Consequently, we have $\Lambda^S_\epsilon(A) = \Lambda_\epsilon(A)$.

Proof: Consider the case $S := J$ when $M^T = M$. Then $MA \in \mathrm{sym}$. Let $\lambda \in \mathbb{C}$. It follows that $A - \lambda I \in S$, that is, $(M(A - \lambda I))^T = M(A - \lambda I)$. Since $M(A - \lambda I)$ is complex symmetric, there exists a unitary matrix $U$ such that the symmetric Takagi factorization [7] $M(A - \lambda I) = U \Sigma U^T$ holds, where $\Sigma$ is a diagonal matrix containing the singular values of $M(A - \lambda I)$ in descending order. Note that $\eta(\lambda, A) = \sigma_{\min}(A - \lambda I) = \sigma_{\min}(M(A - \lambda I)) = \Sigma(n, n)$. Let $u := U(:, n)$. Then we have $M(A - \lambda I)\bar{u} = \eta(\lambda, A)\, u$. This gives $(A - \eta(\lambda, A) M^{-1} u u^T)\bar{u} = \lambda \bar{u}$. Setting $\triangle A := -\eta(\lambda, A) M^{-1} u u^T$, we have $\triangle A \in S$ and $\|\triangle A\| = \eta(\lambda, A) = \eta_S(\lambda, A)$. Hence the results follow.

Next, consider the case $S \in \{J, L\}$ when $M^T = -M$. Then $MA \in \{\mathrm{skew\text{-}sym}, \mathrm{sym}\}$. First consider $MA \in \mathrm{skew\text{-}sym}$. For $\lambda \in \mathbb{C}$ we have $A - \lambda I \in S$, that is, $(M(A - \lambda I))^T = -M(A - \lambda I)$. Since $M(A - \lambda I)$ is complex skew-symmetric, we have the skew-symmetric Takagi factorization [7] $M(A - \lambda I) = U \operatorname{diag}(d_1, \ldots, d_m) U^T$, where $U$ is unitary, $d_j := \begin{bmatrix} 0 & s_j \\ -s_j & 0 \end{bmatrix}$, $s_j \in \mathbb{C}$ is nonzero and the $|s_j|$ are the singular values of $M(A - \lambda I)$. Here the blocks $d_j$ appear in descending order of $|s_j|$. Note that $M(A - \lambda I)\bar{U} = U \operatorname{diag}(d_1, \ldots, d_m)$. Let $u := U(:, n-1:n)$. Then $M(A - \lambda I)\bar{u} = u\, d_m = u\, d_m u^T \bar{u}$. This gives $(MA - u\, d_m u^T)\bar{u} = \lambda M \bar{u}$. Hence, taking $\triangle A := -M^{-1} u\, d_m u^T$, we have $\lambda \in \Lambda(A + \triangle A)$, $\triangle A \in S$ and $\|\triangle A\| = |s_m| = \sigma_{\min}(M(A - \lambda I)) = \sigma_{\min}(A - \lambda I) = \eta(\lambda, A)$. Hence $\eta_S(\lambda, A) = \eta(\lambda, A)$ and the desired result follows.

Finally, consider the case $M^T = -M$ and $MA \in \mathrm{sym}$. Let $\lambda \in \mathbb{C}$. Note that $\sigma_{\min}(M(A - \lambda I)) = \sigma_{\min}(A - \lambda I) = \eta(\lambda, A)$. Let $u$ and $v$ be unit left and right singular vectors of $M(A - \lambda I)$ corresponding to $\eta(\lambda, A)$. Then $M(A - \lambda I)v = \eta(\lambda, A)u$, which gives $Av - \eta(\lambda, A)M^{-1}u = \lambda v$. Let $E \in \mathbb{C}^{n \times n}$ be such that $E = E^T$, $Ev = u$ and $\|E\| = 1$. Such a matrix always exists (Theorem 3.3). Then, setting $\triangle A := -\eta(\lambda, A)M^{-1}E$, we have $(A + \triangle A)v = \lambda v$, $\triangle A \in S$ and $\|\triangle A\| = \eta(\lambda, A) = \eta_S(\lambda, A)$. Hence the result follows. $\blacksquare$

When $M^H = \pm M$, we have partial equality between structured and unstructured pseudospectra.
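To make the Takagi-based construction in the first case of the proof of Theorem 4.4 concrete, here is a small NumPy sketch for $M = I$ (so that $J$ is the set of complex symmetric matrices). The `takagi` helper below is our own illustrative implementation via the SVD, not from the paper, and assumes distinct singular values; the eigenvector of $A + \triangle A$ is the conjugated Takagi vector $\bar{u}$.

```python
import numpy as np

def takagi(S):
    # Takagi factorization S = U @ diag(s) @ U.T of a complex symmetric S,
    # built from the SVD (hypothetical helper; assumes distinct singular values).
    W, s, Vh = np.linalg.svd(S)
    d = np.diag(W.conj().T @ Vh.T)  # diagonal unitary with conj(V) = W @ diag(d)
    return W * np.sqrt(d), s

rng = np.random.default_rng(1)
n = 5
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = B + B.T                        # complex symmetric, i.e. A lies in J for M = I
lam = 0.3 - 0.7j

U, s = takagi(A - lam * np.eye(n))
assert np.allclose(U @ np.diag(s) @ U.T, A - lam * np.eye(n))

u = U[:, -1]                       # Takagi vector for the smallest singular value
dA = -s[-1] * np.outer(u, u)       # complex symmetric perturbation
x = u.conj()

assert np.allclose(dA, dA.T)                      # dA stays in the structure
assert np.allclose((A + dA) @ x, lam * x)         # lam in Lambda(A + dA)
assert np.isclose(np.linalg.norm(dA, 2), s[-1])   # ||dA|| = sigma_min(A - lam I)
```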
Theorem 4.5

Let $S := J$ when $M^H = \pm M$. Let $A \in S$. Then for $\lambda \in \mathbb{R}$, there exists $\triangle A \in S$ such that $\lambda \in \Lambda(A + \triangle A)$ and $\eta_S(\lambda, A) = \|\triangle A\| = \eta(\lambda, A)$. Consequently, we have $\Lambda^S_\epsilon(A) \cap \mathbb{R} = \Lambda_\epsilon(A) \cap \mathbb{R}$. Next, consider $S := L$ when $M^H = \pm M$. Let $A \in S$. Then for $\lambda \in i\mathbb{R}$, there exists $\triangle A \in S$ such that $\lambda \in \Lambda(A + \triangle A)$ and $\eta_S(\lambda, A) = \|\triangle A\| = \eta(\lambda, A)$. Consequently, we have $\Lambda^S_\epsilon(A) \cap i\mathbb{R} = \Lambda_\epsilon(A) \cap i\mathbb{R}$.

Proof: Consider the case $S := J$ when $M^H = \pm M$. Then $MA \in \mathrm{Herm}$ when $M^H = M$, and $MA \in \mathrm{skew\text{-}Herm}$ when $M^H = -M$. First consider $MA \in \mathrm{Herm}$ when $M^H = M$. Now for $\lambda \in \mathbb{R}$ we have $A - \lambda I \in S$, so that $M(A - \lambda I)$ is Hermitian and admits the spectral decomposition $M(A - \lambda I) = U \operatorname{diag}(\mu_1, \ldots, \mu_n) U^H$, where $U$ is unitary and the $\mu_j$ appear in descending order of magnitude. Note that $|\mu_n| = \sigma_{\min}(M(A - \lambda I)) = \sigma_{\min}(A - \lambda I) = \eta(\lambda, A)$. Now, defining $\triangle A := -\mu_n M^{-1} U(:, n)U(:, n)^H$, we have $\lambda \in \Lambda(A + \triangle A)$, $\triangle A \in S$ and $\|\triangle A\| = \eta(\lambda, A) = \eta_S(\lambda, A)$. Hence the result follows. The proof is similar for the case when $MA \in \mathrm{skew\text{-}Herm}$ and $M^H = -M$.

Finally, consider the case $S := L$ when $M^H = \pm M$. Then $MA \in \mathrm{skew\text{-}Herm}$ when $M^H = M$, and $MA \in \mathrm{Herm}$ when $M^H = -M$. First consider $MA \in \mathrm{skew\text{-}Herm}$ when $M^H = M$. Then for $\lambda \in i\mathbb{R}$, the set of purely imaginary numbers, we have $A - \lambda I \in S$, that is, $M(A - \lambda I)$ is skew-Hermitian. Hence the result follows from the spectral decomposition of $M(A - \lambda I)$. The proof is similar for the case when $MA \in \mathrm{Herm}$ and $M^H = -M$. $\blacksquare$
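The Hermitian case of Theorem 4.5 is easy to reproduce numerically. The following NumPy sketch is ours, not the paper's; it takes $M = I$ (so $S = \mathrm{Herm}$) and a real $\lambda$, and builds $\triangle A$ from the eigenvalue of $A - \lambda I$ of smallest magnitude:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5

# Hermitian A (the case S = Herm, i.e. M = I) and a real shift lambda.
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2
lam = 0.4

# Spectral decomposition of A - lam*I; pick the eigenvalue of smallest magnitude.
mu, U = np.linalg.eigh(A - lam * np.eye(n))
k = np.argmin(np.abs(mu))
u = U[:, k]

dA = -mu[k] * np.outer(u, u.conj())   # Hermitian structured perturbation

assert np.allclose(dA, dA.conj().T)         # dA is Hermitian
assert np.allclose((A + dA) @ u, lam * u)   # lam lies in Lambda(A + dA)
# ||dA||_2 equals the unstructured backward error sigma_min(A - lam*I).
assert np.isclose(np.linalg.norm(dA, 2),
                  np.min(np.linalg.svd(A - lam * np.eye(n), compute_uv=False)))
```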
Conclusion.

We have provided a complete solution of the structured mapping problem (Theorem 3.3) for certain classes of structured matrices. More specifically, given a pair of matrices $X$ and $B$ in $\mathbb{K}^{n \times p}$ and a class $S \subset \mathbb{K}^{n \times n}$ of structured matrices, we have provided a complete characterization of the structured solutions of the matrix equation $AX = B$ with $A \in S$ for the case when $S$ is either a Jordan algebra or a Lie algebra associated with an orthosymmetric scalar product on $\mathbb{K}^n$. We have determined all optimal solutions in $S$, that is, structured solutions which have the smallest norm. We have shown that the optimal solution is unique for the Frobenius norm and that there are infinitely many optimal solutions for the spectral norm. We have shown that the results in [9] obtained for a pair of vectors follow as special cases of our general results. Finally, as an application of the structured mapping problem, we have analyzed structured backward errors of approximate invariant subspaces, approximate eigenpairs, and structured pseudospectra of structured matrices.

References

[1]
B. Adhikari, Backward perturbation and sensitivity analysis of structured polynomial eigenvalue problem, PhD thesis, Department of Mathematics, Indian Institute of Technology Guwahati, India, 2008.

[2] R. Alam, S. Bora, M. Karow, V. Mehrmann, and J. Moro, Perturbation theory for Hamiltonian matrices and the distance to bounded-realness, SIAM J. Matrix Anal. Appl., 32 (2011), pp. 484-514.

[3] A. C. Antoulas, Approximation of Large-Scale Dynamical Systems, SIAM, Philadelphia, 2005.

[4] C. Davis, W. M. Kahan and H. F. Weinberger, Norm-preserving dilations and their applications to optimal error bounds, SIAM J. Numer. Anal., 19 (1982), pp. 445-469.

[5] S. Graillat, A note on structured pseudospectra, J. Comput. Appl. Math., 191 (2006), pp. 68-76.

[6] S. Grivet-Talocia, Passivity enforcement via perturbation of Hamiltonian matrices, IEEE Trans. Circuits Syst. I. Regul. Pap., 51 (2004), pp. 1755-1769.

[7] R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, reprinted paperback edition, 1995.

[8] M. Karow, μ-values and spectral value sets for linear perturbation classes defined by a scalar product, SIAM J. Matrix Anal. Appl., 32 (2011), pp. 845-865.

[9] D. S. Mackey, N. Mackey and F. Tisseur, Structured mapping problems for matrices associated with scalar products, Part I: Lie and Jordan algebras, SIAM J. Matrix Anal. Appl., 29 (2008), pp. 1389-1410.

[10] C. Khatri and S. K. Mitra, Hermitian and nonnegative definite solutions of linear matrix equations, SIAM J. Appl. Math., 31 (1976), pp. 579-585.

[11] J. Meinguet, On the Davis-Kahan-Weinberger solution of the norm-preserving dilation problem, Numer. Math., 49 (1986), pp. 331-341.

[12] S. M. Rump, Eigenvalues, pseudospectrum and structured perturbation, Linear Algebra Appl., 413 (2006), pp. 567-593.

[13] J. Sun, Backward perturbation analysis of certain characteristic subspaces, Numer. Math., 65 (1993), pp. 357-382.

[14] F. Tisseur, A chart of backward errors and condition numbers for singly and doubly structured eigenvalue problems, SIAM J. Matrix Anal. Appl., 24 (2003), pp. 877-897.

[15] L. N. Trefethen and M. Embree,