A pointwise weak-majorization inequality for linear maps over Euclidean Jordan algebras
M. Seetharama Gowda
Department of Mathematics and Statistics
University of Maryland Baltimore County
Baltimore, Maryland 21250, USA
and
J. Jeong
Applied Algebra and Optimization Research Center
Sungkyunkwan University
2066 Seobu-ro, Suwon 16419, Republic of Korea
[email protected]
August 18, 2020
Abstract
Given a linear map T on a Euclidean Jordan algebra of rank n, we consider the set of all nonnegative vectors q in R^n with decreasing components that satisfy the pointwise weak-majorization inequality λ(|T(x)|) ≺_w q ∗ λ(|x|), where λ is the eigenvalue map and ∗ denotes the componentwise product in R^n. With respect to the weak-majorization ordering, we show the existence of the least vector in this set. When T is a positive map, the least vector is shown to be the join (in the weak-majorization order) of the eigenvalue vectors of T(e) and T∗(e), where e is the unit element of the algebra. These results are analogous to the results of Bapat [4], proved in the setting of the space of all n × n complex matrices with the singular value map in place of the eigenvalue map. They also extend two recent results of Tao, Jeong, and Gowda [21] proved for quadratic representations and Schur product induced transformations. As an application, we provide an estimate on the norm of a general linear map relative to spectral norms.

Key Words:
Euclidean Jordan algebra, eigenvalue map, weak-majorization ordering, positive map, spectral norm
AMS Subject Classification:
Introduction
This paper deals with a pointwise weak-majorization inequality for an arbitrary linear map on a Euclidean Jordan algebra. Our motivation comes from several sources. In a 1991 paper, Bapat [4] proves the following result for a linear map T on the space M_n of all n × n complex matrices:

There is a unique nonnegative vector η(T) with decreasing components such that s(T(X)) ≺_w η(T) ∗ s(X) for all X ∈ M_n, with the additional property that if the above inequality holds for some q in place of η(T), then η(T) ≺_w q.

Here, s(X) denotes the vector of singular values of X ∈ M_n written in the decreasing order, ∗ denotes the componentwise product in R^n, and ≺_w is the weak-majorization preordering relation on R^n. Specializing the above result, Bapat proves that if T is positive (meaning that it takes positive semidefinite matrices to positive semidefinite matrices), then η(T) is the join of the singular value vectors of T(I) and T∗(I) relative to the weak-majorization preordering, where I denotes the identity matrix. Subsequently, in a 1999 paper, Niezgoda [20] studied a generalization in the setting of group majorization (Eaton triples) and described equivalent formulations for positive linear maps; see Theorem 3.1 and Examples 4.1 and 4.2 in [20].

Going in a different direction, in a recent paper, Tao, Jeong, and Gowda [21] proved, in the setting of Euclidean Jordan algebras, three weak-majorization inequalities. To elaborate, let (V, ∘, ⟨·,·⟩) denote a Euclidean Jordan algebra of rank n with unit e and carrying the trace inner product. For a ∈ V, consider the Lyapunov transformation L_a and the quadratic representation P_a defined, respectively, by

L_a(x) := a ∘ x and P_a(x) := 2 a ∘ (a ∘ x) − a² ∘ x (x ∈ V).

Also, for any n × n real symmetric positive semidefinite matrix A and a fixed Jordan frame in V, consider the corresponding Schur product induced linear transformation D_A : V → V defined by D_A(x) = A • x.
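Since the rank-2 algebra S² of 2 × 2 real symmetric matrices is one of the simplest Euclidean Jordan algebras, the transformation D_A (Schur product taken entrywise relative to the standard frame) can be probed numerically. The sketch below is our own illustration, not part of the development; it randomly tests the weak-majorization inequality for D_A recalled in the next paragraph, using the standard closed-form eigenvalues of a 2 × 2 symmetric matrix:

```python
import math
import random

def eigs2(a, b, c):
    """Eigenvalues of [[a, b], [b, c]] in decreasing order."""
    m, d = (a + c) / 2.0, math.hypot((a - c) / 2.0, b)
    return [m + d, m - d]

def wmaj(p, q, tol=1e-9):
    """p ≺_w q for vectors of length 2, via partial sums of rearrangements."""
    p, q = sorted(p, reverse=True), sorted(q, reverse=True)
    return all(sum(p[:k]) <= sum(q[:k]) + tol for k in (1, 2))

random.seed(5)
for _ in range(2000):
    # A = v v^T + w w^T is symmetric positive semidefinite
    v = [random.uniform(-1, 1) for _ in range(2)]
    w = [random.uniform(-1, 1) for _ in range(2)]
    A = [[v[i] * v[j] + w[i] * w[j] for j in range(2)] for i in range(2)]
    # a random symmetric X = [[a, b], [b, c]]
    a, b, c = (random.uniform(-3, 3) for _ in range(3))
    lamX = [abs(t) for t in eigs2(a, b, c)]                     # lambda(|X|)
    lamDX = [abs(t) for t in eigs2(A[0][0] * a, A[0][1] * b, A[1][1] * c)]
    d = sorted([A[0][0], A[1][1]], reverse=True)                # lambda(diag A)
    assert wmaj(lamDX, [d[0] * max(lamX), d[1] * min(lamX)])
print("2000 random checks of the D_A inequality passed in S^2")
```

The helper names (`eigs2`, `wmaj`) are ours; the check instantiates λ(|D_A(x)|) ≺_w λ(diag A) ∗ λ(|x|) on random positive semidefinite A.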
For any x ∈ V, let λ(x) denote the vector of eigenvalues of x written in the decreasing order. In [21], the following results were proved:

• λ(|L_a(x)|) ≺_w λ(|a|) ∗ λ(|x|) for all x ∈ V.
• λ(|P_a(x)|) ≺_w λ(a²) ∗ λ(|x|) for all x ∈ V.
• λ(|D_A(x)|) ≺_w λ(diag A) ∗ λ(|x|) for all x ∈ V.

In the above, the transformations L_a, P_a, and D_A are all self-adjoint and the last two are positive, that is, they keep the symmetric cone V_+ of V invariant. Writing T for any one of these transformations, the above statements can be written in a unified way:

λ(|T(x)|) ≺_w λ(|T(e)|) ∗ λ(|x|) for all x ∈ V.

Unlike the C∗-algebra M_n and the Euclidean Jordan algebra H_n of all n × n complex Hermitian matrices, a general Euclidean Jordan algebra lacks matrix-type multiplication and associativity; this hinders routine/obvious generalizations of results and proofs. Yet, with powerful and elegant Euclidean Jordan algebra machinery, we show the following results:

Given any linear map T : V → V, there exists a unique nonnegative vector η(T) in R^n with decreasing components such that

λ(|T(x)|) ≺_w η(T) ∗ λ(|x|) for all x ∈ V, (1)

with the additional property that if the above inequality holds with q in place of η(T), then η(T) ≺_w q. Furthermore, if T is positive, then η(T) is the join of the eigenvalue vectors of T(e) and T∗(e) in the weak-majorization preordering on R^n.

In particular, when T is positive and self-adjoint, we have λ(|T(x)|) ≺_w λ(T(e)) ∗ λ(|x|) for all x ∈ V. We note that this last statement recovers the two results of Tao et al. stated for P_a and D_A (proved in [21] by different techniques). Now, constructing some q that satisfies the pointwise inequality λ(|T(x)|) ≺_w q ∗ λ(|x|) is easy: one can take a large positive multiple of the vector of ones in R^n.
Demonstrating the existence of a 'least' q (which implies uniqueness) and describing this q for a positive map requires more, and nontrivial, work. In our analysis, three key results from Euclidean Jordan algebras are used. The first one is the 'Fan-Theobald-von Neumann inequality' [18, 1, 13]:

⟨x, y⟩ ≤ ⟨λ(x), λ(y)⟩ (x, y ∈ V). (2)

The second one is the 'variational principle' [1]:

S_k(x) := λ_1(x) + λ_2(x) + · · · + λ_k(x) = max_{c ∈ I(k)} ⟨x, c⟩, (3)

where I(k) denotes the set of all idempotents of rank k in V. The third key result is a weak-majorization inequality ([21], Lemma 4.2): If ε ∈ V with ε² = e and x ∈ V, then

λ(|x ∘ ε|) ≺_w λ(|x|) ∗ λ(|ε|). (4)

Now, observing the commonality of results in both M_n and Euclidean Jordan algebras, one might wonder if there is a general framework/setting where a unified result could be obtained. We mention Eaton triples [5] (equivalently, normal decomposition systems [17], which include both M_n and simple Euclidean Jordan algebras) or, more generally, Fan-Theobald-von Neumann systems [9] (which include Eaton triples and all Euclidean Jordan algebras) as possible candidates for such a unified approach. The work of Niezgoda [20] mentioned earlier may be taken as a starting point in this regard.

Here is an outline of the paper. In Section 2, we cover some preliminaries. Section 3 deals with our main result (1) for linear maps and a consequence for positive ones. It also includes a result on majorization: There exists q ∈ R^n with decreasing components such that λ(T(x)) ≺ q ∗ λ(x) for all x ∈ V if and only if T is a scalar multiple of a doubly stochastic map. In Section 4, we describe some properties of the nonlinear map η defined on the set of all linear maps on V. As an application, we provide an estimate on the norm of a linear map relative to spectral norms. Examples are provided in Section 5 and a few open problems are mentioned in Section 6.
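In the Euclidean Jordan algebra R^n (componentwise product), λ(x) is the decreasing rearrangement of x and the rank-k idempotents are the 0/1 vectors with exactly k ones, so inequalities (2) and (3) can be sanity-checked by brute force. A small illustrative sketch of ours (the helper names are not from the text):

```python
import itertools
import random

def lam(x):
    """Eigenvalue map in R^n: the decreasing rearrangement."""
    return sorted(x, reverse=True)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

random.seed(0)
n = 4
x = [random.uniform(-2, 2) for _ in range(n)]
y = [random.uniform(-2, 2) for _ in range(n)]

# Fan-Theobald-von Neumann inequality (2): <x, y> <= <lam(x), lam(y)>
assert dot(x, y) <= dot(lam(x), lam(y)) + 1e-12

# variational principle (3): S_k(x) equals the maximum of <x, c> over
# rank-k idempotents c, i.e., over 0/1 vectors with exactly k ones
for k in range(1, n + 1):
    best = max(dot(x, c) for c in itertools.product([0, 1], repeat=n) if sum(c) == k)
    assert abs(best - sum(lam(x)[:k])) < 1e-12
print("inequalities (2) and (3) verified in R^4")
```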
Preliminaries

Throughout this paper, we let R^n denote the real Euclidean n-space with the usual inner product and R^n_+ denote the set of all nonnegative vectors in R^n. We say that a vector q = (q_1, q_2, . . . , q_n) in R^n has decreasing components, or that its components are written in the decreasing order, if q_1 ≥ q_2 ≥ · · · ≥ q_n. We let

(R^n_+)↓ = { q = (q_1, q_2, . . . , q_n) ∈ R^n_+ : q_1 ≥ q_2 ≥ · · · ≥ q_n }.

For p, q ∈ R^n, we write p ≥ q if p − q ∈ R^n_+ and let p ∗ q denote their componentwise product. We write 1^(k) for the vector in R^n with 1s in the first k slots and 0s elsewhere; 1 denotes the vector of ones in R^n. Given p = (p_1, p_2, . . . , p_n) ∈ R^n, we write |p| := (|p_1|, |p_2|, . . . , |p_n|) for the vector of absolute values and p↓ := (p↓_1, p↓_2, . . . , p↓_n) for the decreasing rearrangement of p; the latter is the vector obtained by rearranging the entries of p in a decreasing order. We note the Hardy-Littlewood-Pólya rearrangement inequality ⟨p, q⟩ ≤ ⟨p↓, q↓⟩ and, as a consequence,

⟨p, q⟩ ≤ ⟨|p|, |q|⟩ ≤ ⟨|p|↓, |q|↓⟩ (p, q ∈ R^n). (5)

Given two vectors p and q in R^n, we say that p is weakly majorized by q and write p ≺_w q if ∑_{i=1}^k p↓_i ≤ ∑_{i=1}^k q↓_i for all indices k, 1 ≤ k ≤ n. If, in addition, ∑_{i=1}^n p↓_i = ∑_{i=1}^n q↓_i, we say that p is majorized by q and write p ≺ q. For any p ∈ R^n and index k ∈ {1, 2, . . . , n}, S_k(p) denotes the sum of the k largest components of p, that is, S_k(p) := ∑_{i=1}^k p↓_i. We will use the following result ([2], Problem II.5.16) in R^n:

[ r ≥ 0, p ≺_w q ] ⇒ r↓ ∗ p↓ ≺_w r↓ ∗ q↓ ⇒ ⟨r↓, p↓⟩ ≤ ⟨r↓, q↓⟩. (6)

Consider the ordering relation on R^n induced by weak-majorization. While this relation is merely reflexive and transitive on R^n, it becomes antisymmetric on (R^n_+)↓; thus, it is a partial order on (R^n_+)↓. In this regard, the following result of Bapat is useful.

Proposition 2.1.
(Bapat [4], Lemma 3 and Corollary 4)

(a) Let Q be a nonempty subset of R^n_+. Then, there is a unique q∗ ∈ (R^n_+)↓ such that q∗ ≺_w q for all q ∈ Q and if p ∈ R^n_+ with p ≺_w q for all q ∈ Q, then p ≺_w q∗. We write w-inf(Q) := q∗.

(b) Suppose S is a nonempty bounded subset of R^n_+. Then, there is a unique p∗ ∈ (R^n_+)↓ such that s ≺_w p∗ for all s ∈ S and if p ∈ R^n_+ with s ≺_w p for all s ∈ S, then p∗ ≺_w p. We write w-sup(S) := p∗. When S = {r, s}, the 'join' of r and s is defined/denoted by r ∨_w s := w-sup(S).
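In R^n these notions are directly computable. The sketch below implements the ≺_w test and the w-inf construction from partial sums (described right after Proposition 2.1). For the join of two vectors we use a realization that is our own assumption, consistent with Proposition 2.1(b) but not spelled out in the text: the partial sums of r ∨_w s are taken to be the least concave majorant of the pointwise maximum of the partial sums of r and s.

```python
def dec(v):
    """Decreasing rearrangement v↓."""
    return sorted(v, reverse=True)

def partial_sums(v):
    """(S_1(v), ..., S_n(v))."""
    out, s = [], 0.0
    for a in dec(v):
        s += a
        out.append(s)
    return out

def weakly_majorized(p, q, tol=1e-12):
    """p ≺_w q : compare partial sums of the decreasing rearrangements."""
    return all(a <= b + tol for a, b in zip(partial_sums(p), partial_sums(q)))

def w_inf(Q):
    # Proposition 2.1(a): beta_k = min over Q of S_k(q); q* = first differences
    beta = [min(col) for col in zip(*(partial_sums(q) for q in Q))]
    return [b - a for a, b in zip([0.0] + beta, beta)]

def w_join(r, s):
    # join r ∨_w s: least concave majorant (upper hull) of the pointwise
    # maximum of partial sums -- our reconstruction, not from the text
    g = [0.0] + [max(a, b) for a, b in zip(partial_sums(r), partial_sums(s))]
    hull = [(0, 0.0)]
    for k in range(1, len(g)):
        while len(hull) >= 2 and \
              (hull[-1][1] - hull[-2][1]) * (k - hull[-1][0]) <= \
              (g[k] - hull[-1][1]) * (hull[-1][0] - hull[-2][0]):
            hull.pop()
        hull.append((k, g[k]))
    maj = [0.0] * len(g)
    for (x1, y1), (x2, y2) in zip(hull, hull[1:]):
        for k in range(x1, x2 + 1):
            maj[k] = y1 + (y2 - y1) * (k - x1) / (x2 - x1)
    return [maj[k] - maj[k - 1] for k in range(1, len(g))]

r, s = [3, 0, 0], [2, 2, 2]
j = w_join(r, s)                      # -> [3.0, 1.5, 1.5]
assert weakly_majorized(r, j) and weakly_majorized(s, j)
print(j, w_inf([r, s]))
```

On this pair the join is (3, 1.5, 1.5), which is weakly majorized by the componentwise maximum (3, 2, 2), in line with the observation made after Proposition 2.1.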
In Item (a) above, q∗ is constructed as follows. For 1 ≤ k ≤ n, let β_k := inf_{q ∈ Q} S_k(q) and r_k := β_k − β_{k−1}, where β_0 := 0. Then q∗ := (r_1, r_2, . . . , r_n). Also, in Item (b), for a nonempty bounded subset S of R^n_+, one defines Q := { q : s ≺_w q for all s ∈ S } and p∗ := w-inf(Q). Note that when r, s ∈ (R^n_+)↓, r ∨_w s ≺_w max{r, s}, where max{r, s} is the componentwise maximum of r and s.

Throughout, we let (V, ∘, ⟨·,·⟩) denote a Euclidean Jordan algebra of rank n with unit element e [6, 11]; the Jordan product and inner product of elements x and y in V are, respectively, denoted by x ∘ y and ⟨x, y⟩. We note (one of the defining properties of a Euclidean Jordan algebra):

⟨x ∘ y, z⟩ = ⟨x, y ∘ z⟩ for all x, y, z ∈ V.

It is well known [6] that any Euclidean Jordan algebra is a direct product/sum of simple Euclidean Jordan algebras and every simple Euclidean Jordan algebra is isomorphic to one of five algebras, three of which are the algebras of n × n real/complex/quaternion Hermitian matrices. The other two are the algebra of 3 × 3 octonion Hermitian matrices and the Jordan spin algebra. By the spectral decomposition theorem [6], every element x ∈ V has a decomposition

x = x_1 e_1 + x_2 e_2 + · · · + x_n e_n,

where the real numbers x_1, x_2, . . . , x_n are (called) the eigenvalues of x and {e_1, e_2, . . . , e_n} is a Jordan frame in V. (An element may have decompositions coming from different Jordan frames, but the eigenvalues remain the same.) The trace of x is defined by tr(x) := x_1 + x_2 + · · · + x_n. It is known that (x, y) ↦ tr(x ∘ y) defines another inner product on V that is compatible with the given Jordan product. Throughout this paper, we assume that the inner product on V is this trace inner product, that is, ⟨x, y⟩ = tr(x ∘ y). The rank of an element x is the number of nonzero eigenvalues of x. We use the notation x ≥ 0 (respectively, x > 0) when all the eigenvalues of x are nonnegative (respectively, positive) and x ≥ y (or, y ≤ x) when x − y ≥ 0, etc. We let V_+ := { x ∈ V : x ≥ 0 } denote the symmetric cone of V. It is known that V_+ is a self-dual (closed convex) cone. Given a function φ : R → R, the corresponding
Löwner map (still denoted by φ) is defined via spectral decomposition: If x = ∑_{i=1}^n x_i e_i, then φ(x) := ∑_{i=1}^n φ(x_i) e_i. In particular, for x = ∑_{i=1}^n x_i e_i, we define |x| := ∑_{i=1}^n |x_i| e_i and x⁺ := ∑_{i=1}^n x_i⁺ e_i. By writing |x_i| = x_i ε_i, where ε_i = 1 when x_i ≥ 0 and ε_i = −1 when x_i < 0, we see that |x| = x ∘ ε, where ε² = e (in fact, ε := ε_1 e_1 + ε_2 e_2 + · · · + ε_n e_n).

For any x ∈ V, λ(x), called the eigenvalue vector of x, is the vector of eigenvalues of x written in the decreasing order. We write λ(x) = (λ_1(x), λ_2(x), . . . , λ_n(x)) and note λ_1(x) ≥ λ_2(x) ≥ · · · ≥ λ_n(x). It is known that λ : V → R^n is continuous [1]. We define weak-majorization and majorization in V by: x ≺_w y in V if and only if λ(x) ≺_w λ(y) in R^n, and x ≺ y in V if and only if λ(x) ≺ λ(y) in R^n. The following implication is a consequence of the well-known Hirzebruch max-min theorem [13]:

x ≤ y ⇒ λ(x) ≤ λ(y).

In particular, we have λ(x) ≤ λ(|x|) for all x ∈ V. As |λ(x)|↓ = λ(|x|), combining (2) and (5), we get the following useful inequality:

⟨x, y⟩ ≤ ⟨λ(|x|), λ(|y|)⟩ (x, y ∈ V). (7)

An element c ∈ V is an idempotent if c² = c; it is said to be a primitive idempotent if it is nonzero and cannot be written as the sum of two other nonzero idempotents. By the spectral decomposition theorem, corresponding to any nonzero idempotent c, there is a Jordan frame {e_1, e_2, . . . , e_n} such that c = e_1 + e_2 + · · · + e_k for some k, 1 ≤ k ≤ n. (Here, e_1, e_2, . . . , e_n are mutually orthogonal primitive idempotents.) Hence, the rank of such an idempotent is k. Let I(k) denote the set of all idempotents of rank k, 1 ≤ k ≤ n. We let

I = ⋃_{k=1}^n I(k) = set of all nonzero idempotents.

As V carries the trace inner product, the norm of any primitive idempotent is one and so tr(c) = k for each c ∈ I(k). As the set of all primitive idempotents is compact ([6], page 78), it follows easily that I(k) is compact. Hence, I is also compact. In what follows, we use the letter ε for an element in V with ε² = e. Such an element will have eigenvalues ±
1. We let

E := { ε ∈ V : ε² = e } and I ∘ E := { c ∘ ε : c ∈ I, ε ∈ E }.

For any z in V or in R^n, S_k(z) denotes the sum of the first k largest eigenvalues of z. (Note that R^n is a Euclidean Jordan algebra in which the components of any vector are its eigenvalues, so our notation is consistent.) We let L(V) denote the space of all (continuous) linear maps over V. For T ∈ L(V), we say that T is

(a) positive if T(V_+) ⊆ V_+;
(b) doubly substochastic if T is positive, T(e) ≤ e and T∗(e) ≤ e, where T∗ denotes the adjoint of T;
(c) doubly stochastic if T is positive, T(e) = e and T∗(e) = e;
(d) an algebra automorphism if T is invertible and T(x ∘ y) = T(x) ∘ T(y) for all x, y ∈ V;
(e) a cone automorphism if T(V_+) = V_+.

We respectively write DSS(V), DS(V), Aut(V), and Aut(V_+) for the set of all doubly substochastic maps, doubly stochastic maps, algebra automorphisms, and cone automorphisms on V. The following results are known:

• conv(Aut(V)) ⊆ DS(V), where 'conv' stands for the convex hull, see [8].
• T is doubly substochastic if and only if λ(T(x)) ≺_w λ(x) for all x ≥ 0 in V, see [16], Theorem 3.3.
• T is doubly stochastic if and only if λ(T(x)) ≺ λ(x) for all x in V, see [15], Lemma 2.

The main results

In this section, we establish our main result and state a consequence for positive maps. We will also consider a majorization result.
Theorem 3.1.
Let T : V → V be a linear map. Then, there exists a unique η(T) in (R^n_+)↓ such that

λ(|T(x)|) ≺_w η(T) ∗ λ(|x|) for all x ∈ V, (8)

with the additional property that if the above statement holds with q in place of η(T), then η(T) ≺_w q.

Before the proof, we cover some preliminary results. Let T : V → V be an arbitrary (but fixed) linear map. Correspondingly, we define the following sets in (R^n_+)↓:

Q := { q ∈ (R^n_+)↓ : λ(|T(x)|) ≺_w q ∗ λ(|x|) for all x ∈ V }, (9)
Λ := { λ(|T(c ∘ ε)|) : c ∘ ε ∈ I ∘ E }, (10)
Λ∗ := { λ(|T∗(c ∘ ε)|) : c ∘ ε ∈ I ∘ E }, (11)
S := Λ ∪ Λ∗. (12)

We will show below that Q is nonempty. Hence, by Proposition 2.1, w-inf(Q) is defined. Also, by the compactness of I ∘ E and the continuity of T and λ, it follows that Λ and Λ∗ are both compact. Hence, w-sup(Λ) and w-sup(Λ∗) are defined. As S is the union of Λ and Λ∗, we have

w-sup(S) = w-sup(Λ) ∨_w w-sup(Λ∗).
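In R^n, where a linear map is an arbitrary n × n matrix (with adjoint the transpose) and I ∘ E is the finite set of nonzero vectors with entries in {−1, 0, 1}, membership of a candidate q in the set Q of (9) can be probed numerically: one checks the finite family exhaustively (a reduction made precise in Lemma 3.2 below) and spot-checks the defining inequality on random x. The matrix M and vector q below are arbitrary illustrative choices of ours:

```python
import itertools
import random

def dec(v):
    return sorted(v, reverse=True)

def wmaj(p, q, tol=1e-9):
    """p ≺_w q via partial sums of decreasing rearrangements."""
    sp = sq = 0.0
    for a, b in zip(dec(p), dec(q)):
        sp += a
        sq += b
        if sp > sq + tol:
            return False
    return True

def matvec(M, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in M]

n = 3
M = [[1.0, -2.0, 0.5], [0.0, 3.0, -1.0], [2.0, 1.0, 1.0]]
q = [6.0, 4.0, 3.0]     # candidate q with decreasing nonnegative components

# exhaustive check of lambda(|T(c∘ε)|) ≺_w q ∗ lambda(|c|) over I∘E:
# c∘ε runs over nonzero {-1,0,1}-vectors; the rank k is the number of nonzeros
ok_finite = all(
    wmaj([abs(t) for t in matvec(M, list(v))],
         [q[i] if i < sum(map(abs, v)) else 0.0 for i in range(n)])
    for v in itertools.product([-1.0, 0.0, 1.0], repeat=n)
    if any(v)
)

# spot check of the defining inequality in (9) on random x
random.seed(2)
ok_random = all(
    wmaj([abs(t) for t in matvec(M, x)],
         [a * b for a, b in zip(q, dec([abs(t) for t in x]))])
    for x in ([random.uniform(-5, 5) for _ in range(n)] for _ in range(500))
)
print(ok_finite, ok_random)
```

Both checks pass for this choice of q, i.e., this q belongs to Q.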
In the following result, we describe the set Q in different, but equivalent, ways.

Lemma 3.2.
Let T be a linear map on V and q ∈ (R^n_+)↓. Then, the following are equivalent:

(i) λ(|T(x)|) ≺_w q ∗ λ(|x|) for all x ∈ V.
(ii) λ(|T(c ∘ ε)|) ≺_w q ∗ λ(|c ∘ ε|) for all c ∘ ε ∈ I ∘ E.
(iii) λ(|T(c ∘ ε)|) ≺_w q ∗ λ(|c|) for all c ∘ ε ∈ I ∘ E.
(iv) λ(|T∗(x)|) ≺_w q ∗ λ(|x|) for all x ∈ V.
(v) λ(|T∗(c ∘ ε)|) ≺_w q ∗ λ(|c ∘ ε|) for all c ∘ ε ∈ I ∘ E.
(vi) λ(|T∗(c ∘ ε)|) ≺_w q ∗ λ(|c|) for all c ∘ ε ∈ I ∘ E.

Proof. (i) ⇒ (ii): This follows by specializing x to c ∘ ε.

(ii) ⇒ (iii): Suppose (ii) holds so that λ(|T(c ∘ ε)|) ≺_w q ∗ λ(|c ∘ ε|) for all c ∘ ε ∈ I ∘ E. As λ(|ε|) = 1, in view of (4) and (6), this simplifies to λ(|T(c ∘ ε)|) ≺_w q ∗ λ(|c|) for all c ∘ ε ∈ I ∘ E, which is (iii).

(iii) ⇒ (iv): We assume (iii). To see (iv), we have to show that for each index k, 1 ≤ k ≤ n, and x ∈ V,

S_k(|T∗(x)|) ≤ S_k(q ∗ λ(|x|)).

Fix k and x, and let c ∈ I(k) be arbitrary. Then, writing |T∗(x)| = T∗(x) ∘ ε for some ε ∈ E, we have

⟨|T∗(x)|, c⟩ = ⟨T∗(x) ∘ ε, c⟩ = ⟨T∗(x), c ∘ ε⟩ = ⟨x, T(c ∘ ε)⟩ ≤ ⟨λ(|x|), λ(|T(c ∘ ε)|)⟩ ≤ ⟨λ(|x|), q ∗ λ(|c|)⟩ = ⟨q ∗ λ(|x|), λ(|c|)⟩,

where the first inequality is due to (7), and the second one is due to condition (iii) coupled with (6). Then, as λ(|c|) = λ(c) = 1^(k), we have

⟨|T∗(x)|, c⟩ ≤ ⟨q ∗ λ(|x|), λ(c)⟩ = ∑_{i=1}^k (q ∗ λ(|x|))_i = S_k(q ∗ λ(|x|)).

Now, taking the maximum over c ∈ I(k) and using (3), we get S_k(|T∗(x)|) ≤ S_k(q ∗ λ(|x|)). This proves that λ(|T∗(x)|) ≺_w q ∗ λ(|x|) for all x ∈ V.

(iv) ⇒ (v): This can be seen by specializing x to c ∘ ε.

The implications (v) ⇒ (vi) and (vi) ⇒ (i) are seen by replacing T by T∗ in the implications (ii) ⇒ (iii) and (iii) ⇒ (iv).

Remark 1.
Recall that a linear map T is positive if T(V_+) ⊆ V_+. For such a map, it is known ([21], Example 3.7) that

λ(|T(x)|) ≺_w λ(T(|x|)) for all x ∈ V. (13)

Using this and with appropriate modifications, we can simplify Lemma 3.2 and its proof as follows (stated without proof): When T is positive and q ∈ (R^n_+)↓, the following are equivalent:

(i) λ(|T(x)|) ≺_w q ∗ λ(|x|) for all x ∈ V.
(ii) λ(T(x)) ≺_w q ∗ λ(x) for all x ≥ 0.
(iii) λ(T(c)) ≺_w q ∗ λ(c) for all c ∈ I.
(iv) λ(T∗(x)) ≺_w q ∗ λ(x) for all x ≥ 0.
(v) λ(T∗(c)) ≺_w q ∗ λ(c) for all c ∈ I.

In [20], Theorem 3.1, Niezgoda proves a result for certain types of linear maps in the setting of Eaton triples. When specialized, it will yield a result of the above type for positive linear maps on simple
Euclidean Jordan algebras. (Note: So far, it is only known that every simple Euclidean Jordan algebra is a normal decomposition system, equivalently, an Eaton triple [18].)
Lemma 3.3.
Given a linear map T : V → V, consider the sets Q and S defined in (9)-(12). Then, the following hold:

(i) s ≺_w q for all s ∈ S and q ∈ Q.
(ii) w-sup(S) ∈ Q.
(iii) w-sup(S) = w-inf(Q).

Proof.
We first observe that Q is nonempty. This can be seen by taking q = t1, where t is a large positive number, and using Item (iii) in Lemma 3.2 along with the compactness of Λ.

(i) Let q ∈ Q so that (by the previous lemma), λ(|T(c ∘ ε)|) ≺_w q ∗ λ(c) and λ(|T∗(c ∘ ε)|) ≺_w q ∗ λ(c) for all c ∘ ε ∈ I ∘ E. Let s ∈ S = Λ ∪ Λ∗. If s ∈ Λ, then s = λ(|T(c ∘ ε)|) for some c ∘ ε ∈ I ∘ E, in which case,

λ(|T(c ∘ ε)|) ≺_w q ∗ λ(c) ≺_w q,

where the second inequality follows from the nonnegativity of q and the fact that λ(c) = 1^(k) for some index k. A similar statement ensues if s ∈ Λ∗. Hence, s ≺_w q. This proves (i). We also see, by Proposition 2.1, that w-sup(S) ≺_w w-inf(Q).

(ii) Let q := w-sup(S). To see q ∈ Q, it is enough to show, by Lemma 3.2, that

λ(|T(c ∘ ε)|) ≺_w q ∗ λ(c) (14)

for all c ∘ ε ∈ I ∘ E. Consider an index k and c ∈ I(k). Then (14) is equivalent to

S_l(|T(c ∘ ε)|) ≤ S_{min{k,l}}(q) for all 1 ≤ l ≤ n. (15)

Now, fix c ∘ ε ∈ I ∘ E, l ∈ {1, 2, . . . , n}, and choose ε′ ∈ E such that |T(c ∘ ε)| = T(c ∘ ε) ∘ ε′, and let c′ ∈ I(l). If l ≤ k, then we have

⟨|T(c ∘ ε)|, c′⟩ = ⟨T(c ∘ ε) ∘ ε′, c′⟩ ≤ ⟨λ(|T(c ∘ ε) ∘ ε′|), λ(c′)⟩ ≤ ⟨λ(|T(c ∘ ε)|), λ(c′)⟩ ≤ ⟨q, λ(c′)⟩ = S_l(q),

where the first inequality is due to (7), and the second inequality is due to (4) coupled with (6). The last inequality is due to λ(|T(c ∘ ε)|) ≺_w q, as q = w-sup(S).
By taking the supremum over c′ ∈ I(l), we get S_l(|T(c ∘ ε)|) ≤ S_l(q). On the other hand, if l > k, then

⟨|T(c ∘ ε)|, c′⟩ = ⟨T(c ∘ ε) ∘ ε′, c′⟩ = ⟨T(c ∘ ε), c′ ∘ ε′⟩ = ⟨c ∘ ε, T∗(c′ ∘ ε′)⟩ = ⟨c, [T∗(c′ ∘ ε′)] ∘ ε⟩ ≤ ⟨λ(c), λ(|[T∗(c′ ∘ ε′)] ∘ ε|)⟩ ≤ ⟨λ(c), λ(|T∗(c′ ∘ ε′)|)⟩ ≤ ⟨λ(c), q⟩ = S_k(q),

where the first inequality is due to (7), the second one is due to (4), and the last one is due to the inequality λ(|T∗(c′ ∘ ε′)|) ≺_w q = w-sup(S). Again, taking the supremum over c′ ∈ I(l), we get S_l(|T(c ∘ ε)|) ≤ S_k(q). Hence, we have proved (15), so q = w-sup(S) ∈ Q. This proves (ii).

(iii) From (ii), q := w-sup(S) ∈ Q; hence, w-inf(Q) ≺_w q = w-sup(S). From (i), the reverse inequality holds. Since the weak-majorization ordering is antisymmetric on (R^n_+)↓, we have (iii).

We now come to the proof of our main theorem.

Proof of Theorem 3.1.
Given T, we define Q and S as in (9)-(12). Let η(T) := w-sup(S). As this belongs to Q, we see statement (8) in Theorem 3.1. The additional item follows from the equality η(T) = w-sup(S) = w-inf(Q).

Generally, finding/describing η(T) may not be easy. However, when T is a positive map, we have a simple expression for η(T).

Corollary 3.4. Let T be a positive linear map on V. Then,

w-sup(Λ) = λ(T(e)) and w-sup(Λ∗) = λ(T∗(e)).

Hence, η(T) = λ(T(e)) ∨_w λ(T∗(e)). In particular, if T is also self-adjoint, then η(T) = λ(T(e)).

Proof.
Consider c ∘ ε ∈ I ∘ E. Since λ(|c ∘ ε|) ≺_w λ(c) ≤ λ(e) = 1, the first component in λ(|c ∘ ε|) is less than or equal to 1. Hence, all components in λ(|c ∘ ε|) are less than or equal to one. It follows (by considering the spectral decomposition) that |c ∘ ε| ≤ e. As T is positive, T(|c ∘ ε|) ≤ T(e) and so λ(T(|c ∘ ε|)) ≤ λ(T(e)). However, by (13), λ(|T(c ∘ ε)|) ≺_w λ(T(|c ∘ ε|)). Hence,

λ(|T(c ∘ ε)|) ≺_w λ(T(e)).

As c ∘ ε is arbitrary in I ∘ E, we see that w-sup(Λ) ≺_w λ(T(e)). But e ∈ I ∘ E, and so, λ(T(e)) ≺_w w-sup(Λ). Thus, w-sup(Λ) = λ(T(e)). Now, T∗ is also positive (this is due to V_+ being a self-dual cone); hence, by the above, w-sup(Λ∗) = λ(T∗(e)). So,

η(T) = w-sup(Λ ∪ Λ∗) = λ(T(e)) ∨_w λ(T∗(e)).

We note a simple bound when T is positive: η(T) = λ(T(e)) ∨_w λ(T∗(e)) ≺_w max{λ(T(e)), λ(T∗(e))}.

Remark 2.
In the last statement of the above corollary, the requirement that T be self-adjoint can be slightly relaxed. Suppose T = P Φ, where Φ is an algebra automorphism of V and P is positive and self-adjoint; see Example 4 for such a map. As V carries the trace inner product, Φ∗ is also an algebra automorphism of V, hence preserves eigenvalues. So, λ(T∗(e)) = λ(Φ∗ P(e)) = λ(P(e)) = λ(P(Φ(e))) = λ(T(e)). Thus, when T = P Φ,

η(T) = λ(T(e)).

We note that if T = Φ P, where Φ and P are as above, then η(T) = λ(T∗(e)).

Motivated by our main theorem, we ask if (8) has a majorization analog. The following result provides an answer. In what follows, we use the fact that (−1 ∗ λ(x))↓ = λ(−x) for all x ∈ V, and note that p ≺ q in R^n is, by definition, equivalent to p↓ ≺ q↓.

Theorem 3.5. Let T be a linear map on V. Then, the following are equivalent:

(i) There exists a vector q in R^n with decreasing components such that λ(T(x)) ≺ q ∗ λ(x) for all x ∈ V.
(ii) T is a scalar multiple of a doubly stochastic map.

Proof. (i) ⇒ (ii): We assume that q in (i) is given by q = (q_1, q_2, . . . , q_n). We fix a Jordan frame {f_1, f_2, . . . , f_n} in V and let a := ∑_{i=1}^n q_i f_i so that q = λ(a). Then (i) reads

λ(T(x)) ≺ λ(a) ∗ λ(x) for all x ∈ V.

This implies that ∑_{i=1}^n λ_i(T(x)) = ∑_{i=1}^n λ_i(a) λ_i(x) for all x. Since V carries the trace inner product, ∑_{i=1}^n λ_i(T(x)) = ⟨T(x), e⟩ = ⟨x, T∗(e)⟩ and so ⟨x, T∗(e)⟩ = ⟨λ(x), λ(a)⟩. Let b := T∗(e) so that

⟨x, b⟩ = ⟨λ(x), λ(a)⟩ for all x ∈ V. (16)

We claim that

⟨λ(x), λ(b)⟩ = ⟨λ(x), λ(a)⟩ for all x ∈ V. (17)

To see this, fix any x ∈ V and consider the spectral decomposition b = ∑_{i=1}^n λ_i(b) e_i, where {e_1, e_2, . . . , e_n} is a Jordan frame. Corresponding to this Jordan frame and the given x, define

y := ∑_{i=1}^n λ_i(x) e_i.
Then, applying (16) to y, we have ⟨y, b⟩ = ⟨λ(y), λ(a)⟩. As λ(x) = λ(y) and ⟨y, b⟩ = ∑_{i=1}^n λ_i(x) λ_i(b) = ⟨λ(x), λ(b)⟩, we see that ⟨λ(x), λ(b)⟩ = ⟨λ(x), λ(a)⟩. This proves our claim. By specializing λ(x) in (17) (for example, to 1^(k) for any index k, 1 ≤ k ≤ n), we get λ(b) = λ(a). But then, (16) leads to

⟨x, b⟩ = ⟨λ(x), λ(b)⟩ for all x ∈ V.

This means that every x in V 'strongly operator commutes' [9] with b. As shown in the Remark below, this can happen if and only if b is a scalar multiple of e. Since λ(b) = λ(a), a must also be a scalar multiple of e (this can be seen via the spectral decomposition). Let a = t e for some t ∈ R so that q = t1 and

λ(T(x)) ≺ t1 ∗ λ(x) for all x ∈ V. (18)

If t = 0, then λ(T(x)) ≺ 0 for all x ∈ V. Hence, T = 0, that is, T is a (zero) multiple of the identity map (which is doubly stochastic). When t is nonzero, we complete the proof by showing that D := (1/t) T is doubly stochastic. When t >
0, because of (18), λ(D(x)) ≺ 1 ∗ λ(x) = λ(x) for all x; so D is doubly stochastic. When t <
0, say t = −s with s > 0, the equality λ(D(x)) = (1/s) λ(−T(x)) = (1/s) λ(T(−x)), together with (18) applied to −x, implies

λ(D(x)) = (1/s) λ(T(−x)) ≺ (1/s) (t1 ∗ λ(−x))↓ = (−1 ∗ λ(−x))↓ = λ(x),

that is, λ(D(x)) ≺ λ(x) for all x ∈ V. Now, T = tD says that T is a scalar multiple of D.

(ii) ⇒ (i): Suppose T = tD, where t ∈ R and D is a doubly stochastic map. Since λ(D(x)) ≺ 1 ∗ λ(x) for all x, by scaling, λ(T(x)) ≺ q ∗ λ(x) for all x. (This scaling is obvious when t ≥ 0. When t <
0, say t = −s with s > 0, we have λ(T(x)) = λ(−sD(x)) = s λ(−D(x)) = s λ(D(−x)) ≺ s λ(−x) = (t1 ∗ λ(x))↓.) Putting q = t1, we see that λ(T(x)) ≺ q ∗ λ(x) for all x ∈ V.

Remark 3.
We show that if b strongly operator commutes [9] with every x ∈ V, that is, if ⟨x, b⟩ = ⟨λ(x), λ(b)⟩ for all x ∈ V, then b is a scalar multiple of e. Suppose this condition holds. Then, putting x = −b, we have −||b||² = ⟨−b, b⟩ = ⟨λ(−b), λ(b)⟩. As V carries the trace inner product, ||λ(y)|| = ||y|| for all y ∈ V. Hence,

||λ(−b) + λ(b)||² = ⟨λ(−b) + λ(b), λ(−b) + λ(b)⟩ = ||b||² − 2||b||² + ||b||² = 0.

This implies that λ(−b) = −λ(b). As λ(−b) has components in the decreasing order and −λ(b) has components in the increasing order, we see that λ_1(b) = λ_n(b). This proves that all components in λ(b) are equal. Thus, b is a multiple of e.

Properties of the map η

In this section, we describe some properties of the map η : T ↦ η(T) from L(V) to (R^n_+)↓, where L(V) denotes the set of all (continuous) linear maps on V. First, some notation. For any x ∈ V, we let

||x||_∞ := ||λ(x)||_∞ = max_{1 ≤ i ≤ n} |λ_i(x)| = λ_1(|x|)

and, for T ∈ L(V),

||T||_∞ := sup_{0 ≠ x ∈ V} ||T(x)||_∞ / ||x||_∞.

Theorem 4.1.
The following statements hold for
T, T_1, T_2 ∈ L(V) and α ∈ R:

(a) ||η(T)||_∞ = max{||T||_∞, ||T∗||_∞}.
(b) η(αT) = |α| η(T).
(c) η(T_1 T_2) ≺_w η(T_1) ∗ η(T_2).
(d) η(T_1 + T_2) ≺_w η(T_1) + η(T_2).
(e) η is continuous.
(f) η is 'isotonic': If T_1(x) ≺ T_2(x) for all x ∈ V, then η(T_1) ≺_w η(T_2).

Proof. (a) Fix T. From the pointwise inequality λ(|T(x)|) ≺_w η(T) ∗ λ(|x|), we see that

||T(x)||_∞ = λ_1(|T(x)|) ≤ (η(T))_1 λ_1(|x|) = ||η(T)||_∞ ||x||_∞.

It follows that ||T||_∞ ≤ ||η(T)||_∞. By Lemma 3.2, λ(|T∗(x)|) ≺_w η(T) ∗ λ(|x|). This yields ||T∗||_∞ ≤ ||η(T)||_∞. Hence, max{||T||_∞, ||T∗||_∞} ≤ ||η(T)||_∞.

We now prove the reverse inequality. Let, for 1 ≤ k ≤ n,

θ_k := sup_{c ∘ ε ∈ I ∘ E} λ_k(|T(c ∘ ε)|) and θ∗_k := sup_{c ∘ ε ∈ I ∘ E} λ_k(|T∗(c ∘ ε)|).

Let θ := (θ_1, θ_2, . . . , θ_n) and θ∗ := (θ∗_1, θ∗_2, . . . , θ∗_n). As the components in any λ(x) are decreasing, we see that θ, θ∗ ∈ (R^n_+)↓. Let q̄ := max{θ, θ∗}. For the given T, we define the sets Q, Λ, Λ∗, and S as in (9)-(12). Suppose s ∈ Λ so that s = λ(|T(c ∘ ε)|) for some c ∘ ε ∈ I ∘ E. Then, for any k, 1 ≤ k ≤ n,

S_k(s) = ∑_{i=1}^k λ_i(|T(c ∘ ε)|) ≤ ∑_{i=1}^k θ_i ≤ S_k(q̄).

We have a similar statement when s ∈ Λ∗. Hence, s ≺_w q̄ for all s ∈ S. This implies that w-sup(S) ≺_w q̄. However, w-sup(S) = w-inf(Q) = η(T) and so η(T) ≺_w q̄. Thus, η(T) ≺_w max{θ, θ∗}. This implies that

||η(T)||_∞ = (η(T))_1 ≤ max{θ_1, θ∗_1}.

However,

θ_1 = sup_{c ∘ ε ∈ I ∘ E} λ_1(|T(c ∘ ε)|) = sup_{c ∘ ε ∈ I ∘ E} ||T(c ∘ ε)||_∞ ≤ ||T||_∞,

where the inequality is due to

||T(c ∘ ε)||_∞ ≤ ||T||_∞ ||c ∘ ε||_∞ = ||T||_∞ λ_1(|c ∘ ε|) ≤ ||T||_∞ λ_1(|c|) = ||T||_∞.

Similarly, θ∗_1 ≤ ||T∗||_∞. Hence, ||η(T)||_∞ ≤ max{||T||_∞, ||T∗||_∞}.
Since the reverse inequality has already been proved, we have Item (a).

(b) This is easy to see from the uniqueness part in the main theorem.

(c) Let T_1, T_2 ∈ L(V). Then,

λ(|T_1 T_2(x)|) ≺_w η(T_1) ∗ λ(|T_2(x)|) ≺_w η(T_1) ∗ η(T_2) ∗ λ(|x|) for all x ∈ V,

where we have used (6) in the second inequality. From the main theorem, we have η(T_1 T_2) ≺_w η(T_1) ∗ η(T_2).

(d) Suppose a, b ∈ V. Writing |a + b| = (a + b) ∘ ε for some ε ∈ E, we have

λ(|a + b|) = λ((a + b) ∘ ε) = λ(a ∘ ε + b ∘ ε) ≺ λ(a ∘ ε) + λ(b ∘ ε) ≤ λ(|a ∘ ε|) + λ(|b ∘ ε|) ≺_w λ(|a|) + λ(|b|),

where the first inequality follows from the Lidskii type inequality λ(x + y) ≺ λ(x) + λ(y) in V (which easily follows from a result on simple Euclidean Jordan algebras [19] or from a general result on hyperbolic polynomials [14]) and the last inequality follows from (4). Now, we put a = T_1(x), b = T_2(x) and use the main theorem to get, for any x ∈ V,

λ(|T_1(x) + T_2(x)|) ≺_w (η(T_1) + η(T_2)) ∗ λ(|x|).

This gives the stated conclusion in (d).

(e) From Items (b) and (d), we see that for each index k, the function T ↦ S_k(η(T)) is positively homogeneous and subadditive, hence convex. As convex functions (with domain L(V)), these functions are continuous. Hence, the kth component of η(T), being the difference of S_k(η(T)) and S_{k−1}(η(T)), is also continuous in T. This means that η(T) is continuous in T.

(f) Suppose T_1(x) ≺ T_2(x) for all x ∈ V. Then, by definition, λ(T_1(x)) ≺ λ(T_2(x)) for all x ∈ V. Hence, |λ(T_1(x))| ≺_w |λ(T_2(x))|, or equivalently, λ(|T_1(x)|) ≺_w λ(|T_2(x)|) for all x. Then, the pointwise inequality

λ(|T_1(x)|) ≺_w λ(|T_2(x)|) ≺_w η(T_2) ∗ λ(|x|)

implies that η(T_1) ≺_w η(T_2).

Remark 4.
In this remark, it is convenient to let p, r, s denote real numbers. For r ∈ [1, ∞] and u ∈ Rⁿ, ||u||_r is the usual r-norm of u in Rⁿ. For any x ∈ V, we define the corresponding spectral norm ||x||_r := ||λ(x)||_r, which is [Σ_{i=1}^{n} |λᵢ(x)|^r]^{1/r} when 1 ≤ r < ∞ and max_{1≤i≤n} |λᵢ(x)| when r = ∞. Given r, s ∈ [1, ∞] and T ∈ L(V), we define the norm of T from (V, ||·||_r) to (V, ||·||_s) by

||T||_{r→s} := sup_{0 ≠ x ∈ V} ||T(x)||_s / ||x||_r.

Now, suppose p, r, s ∈ [1, ∞] with 1/p = 1/r + 1/s. Then, the following inequality holds for any T ∈ L(V) and x ∈ V:

||T(x)||_p ≤ ||η(T)||_r ||x||_s.   (19)

To see this, we follow the argument given in Theorem 5.1 of [21]. Starting from the inequality λ(|T(x)|) ≺_w η(T) ∗ λ(|x|), (19) is easily seen when p = ∞ and p = 1. For 1 < p < ∞, we use the fact that the function φ(t) = t^p is an increasing convex function on [0, ∞) to get ([2], Exercise II.3.2)

||T(x)||_p ≤ ||η(T) ∗ λ(|x|)||_p.

An application of the (classical) generalized Hölder inequality gives (19). Additionally, as in Theorem 5.1 of [21], we can prove the following:

||T||_{r→s} ≤ ||η(T)||_∞ if r ≤ s, and ||T||_{r→s} ≤ ||η(T)||_{rs/(r−s)} if s < r.

To see a special case, suppose T is positive and self-adjoint. In this case, η(T) = λ(T(e)) and so

||T||_{r→s} ≤ ||T(e)||_∞ if r ≤ s, and ||T||_{r→s} ≤ ||T(e)||_{rs/(r−s)} if s < r.

In some special cases, equality holds; see, e.g., Theorem 5.1 in [21].
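Inequality (19) can be checked numerically in the Euclidean Jordan algebra Sⁿ of real symmetric matrices, whose eigenvalue map is the usual one. The sketch below is a minimal check, not part of the paper: it assumes numpy, uses the arbitrary exponent choice p = 1, r = s = 2 (which satisfies the Hölder relation 1/p = 1/r + 1/s), and takes T to be the positive, self-adjoint quadratic representation P_A(X) = AXA, for which η(P_A) = λ(P_A(e)) = λ(A²) by Example 1 below.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def lam(X):
    """Eigenvalues of a symmetric matrix, arranged decreasingly."""
    return np.sort(np.linalg.eigvalsh(X))[::-1]

def spectral_norm(X, p):
    """Spectral p-norm ||X||_p = p-norm of the eigenvalue vector."""
    return np.linalg.norm(np.linalg.eigvalsh(X), p)

# Positive, self-adjoint map: the quadratic representation P_A(X) = A X A
A = rng.standard_normal((n, n)); A = A + A.T
eta = lam(A @ A)          # eta(P_A) = lambda(P_A(e)) = lambda(A^2)

p, r, s = 1, 2, 2         # exponents with 1/p = 1/r + 1/s
for _ in range(200):
    X = rng.standard_normal((n, n)); X = X + X.T
    lhs = spectral_norm(A @ X @ A, p)
    rhs = np.linalg.norm(eta, r) * spectral_norm(X, s)
    assert lhs <= rhs + 1e-8
print("inequality (19) verified on random samples")
```

Here the spectral p-norm of a symmetric matrix is the p-norm of its eigenvalue vector, matching the definition above; the random data and seed are arbitrary.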
In this section, we present some examples.
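Many of the inequalities below can be sanity-checked numerically in V = Sⁿ, where the Jordan product is a∘x = (ax + xa)/2, the unit is the identity matrix, and λ(|x|) is the vector of absolute eigenvalues arranged decreasingly. The following sketch (assuming numpy; the random data and seed are arbitrary) tests the inequality λ(|L_a(x)|) ≺_w λ(|a|) ∗ λ(|x|) via partial sums:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

def lam_abs(X):
    """lambda(|X|): absolute eigenvalues of symmetric X, decreasing."""
    return np.sort(np.abs(np.linalg.eigvalsh(X)))[::-1]

def weakly_majorized(s, q, tol=1e-8):
    """s ≺_w q : every leading partial sum of s is at most that of q."""
    s, q = np.sort(s)[::-1], np.sort(q)[::-1]
    return np.all(np.cumsum(s) <= np.cumsum(q) + tol)

a = rng.standard_normal((n, n)); a = a + a.T
for _ in range(200):
    x = rng.standard_normal((n, n)); x = x + x.T
    La_x = (a @ x + x @ a) / 2      # Jordan product a ∘ x in S^n
    assert weakly_majorized(lam_abs(La_x), lam_abs(a) * lam_abs(x))
print("weak-majorization inequality verified on all samples")
```

The helper `weakly_majorized` implements exactly the partial-sum criterion S_k(s) ≤ S_k(q) used in the proofs above.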
Example 1 (Lyapunov transformation L_a and the quadratic representation P_a)
Recall that for any a ∈ V, L_a is defined by L_a(x) = a∘x. As mentioned in the Introduction, we have λ(|L_a(x)|) ≺_w λ(|a|) ∗ λ(|x|) for all x ∈ V. (Note that this result does not come from any of our results above.) By our main theorem, we see that η(L_a) ≺_w λ(|a|). However, by putting x = e in the inequality λ(|L_a(x)|) ≺_w η(L_a) ∗ λ(|x|), we see that λ(|a|) ≺_w η(L_a). Hence, η(L_a) = λ(|a|).

Now, P_a, defined by P_a(x) := 2 a∘(a∘x) − a²∘x, is positive and self-adjoint; hence, from Corollary 3.4, we have η(P_a) = λ(P_a(e)) = λ(a²).

Example 2 (Doubly stochastic maps)
Suppose T is doubly stochastic, so that T is positive and T(e) = T*(e) = e. In this case, from Corollary 3.4, λ(|T(x)|) ≺_w λ(|x|) for all x ∈ V and η(T) = (1, 1, . . . , 1). As noted earlier, λ(T(x)) ≺ λ(x) for all x ∈ V. These apply when T is a convex combination of algebra automorphisms [8].

Example 3 (Doubly substochastic maps)
When T is doubly substochastic, T is positive with T(e) ≤ e and T*(e) ≤ e. So,

η(T) = λ(T(e)) ∨_w λ(T*(e)) ≺_w λ(e) = (1, 1, . . . , 1),

and λ(|T(x)|) ≺_w λ(|x|) for all x ∈ V, or equivalently, λ(T(x)) ≺_w λ(x) for all x ≥ 0.

Example 4 (Cone automorphisms)
Suppose V is a simple Euclidean Jordan algebra and T ∈ cl(Aut(V₊)) (the closure of Aut(V₊) in L(V)). We claim that η(T) = λ(T(e)). To see this, first suppose T ∈ Aut(V₊). Because V is assumed to be simple, we can write T = P_a Φ, where Φ is an algebra automorphism and P_a is the quadratic representation of (some) a > 0; see [6], Page 56. So, by Remark 2, η(T) = λ(T(e)). Now the result for any T ∈ cl(Aut(V₊)) follows from the continuity of η and λ. In the same setting, consider a linear map S : V → V which is a sum of a finite number of maps in cl(Aut(V₊)). Then, S is a positive map and Corollary 3.4 can be applied. To illustrate these results, let V = Hⁿ. Then, any T ∈ Aut(Hⁿ₊) is of the form T(X) = AXA* for some invertible n × n complex matrix A. In this setting,

η(T) = λ(T(I)) = λ(AA*).

Now consider a completely positive map S on Hⁿ, which is, by definition, a finite sum of the form S(X) := Σ_{k=1}^{N} AₖXAₖ* with Aₖ ∈ Mₙ for all k. Letting C := S(I) = Σ_{k=1}^{N} AₖAₖ* and D := S*(I) = Σ_{k=1}^{N} Aₖ*Aₖ, we have

η(S) = λ(C) ∨_w λ(D) ≺_w max{λ(C), λ(D)}.

Example 5 (Z and Lyapunov-like transformations)
We say that a linear map L : V → V is a Z-transformation [12] if

[ x, y ≥ 0, ⟨x, y⟩ = 0 ] ⇒ ⟨L(x), y⟩ ≤ 0.

It is said to be
Lyapunov-like if the inequality on the right becomes an equality. Such maps appear in dynamical systems theory. If a Z-transformation L is also positive stable (meaning that all eigenvalues of L have positive real parts), then it is known that L⁻¹ is a positive map on V [12]. In this case, Corollary 3.4 is applicable to L⁻¹. To see an important special case, suppose A is an n × n positive stable complex matrix. Then L_A, defined on Hⁿ by L_A(X) := AX + XA*, is Lyapunov-like and positive stable. Let C := L_A⁻¹(I) and D := L_{A*}⁻¹(I). Then,

η(L_A⁻¹) = λ(C) ∨_w λ(D) ≺_w max{λ(C), λ(D)}.

We note that for any X ∈ Hⁿ, L_A⁻¹(X) has the integral representation

L_A⁻¹(X) = ∫₀^∞ e^{−tA} X e^{−tA*} dt,

with a similar representation for L_{A*}⁻¹(X). These give integral representations for C and D, but it is unclear how to represent η(L_A⁻¹) either in an integral form or in a closed form.

Example 6 (Löwner maps)
Given a function φ : R → R, consider the corresponding Löwner map φ defined on V (see Section 2). Motivated by the inequality (8), we ask if the absolute value function can be replaced by φ. Keeping close to the properties of the absolute value function, we say that a function φ : R → R is sublinear if

1. φ(μt) = μ φ(t) for all μ ≥ 0 and t ∈ R;
2. φ(t + s) ≤ φ(t) + φ(s) for all t, s ∈ R.

It is easy to see that sublinear functions on R are of the form φ(t) = α t for t ≥ 0 and φ(t) = β t for t ≤
0, where the constants α, β ∈ R satisfy β ≤ α. Among these, we consider the ones that are nonnegative (that is, φ(t) ≥ 0 for all t). Examples include φ(t) = |t|, φ(t) = max{t, 0}, and φ(t) = max{−t, 0}.

Now suppose φ is sublinear and nonnegative, and consider the corresponding Löwner map. Then, for any positive linear map T, we have

λ(φ(T(x))) ≺_w λ(T(φ(x))) ≺_w η(T) ∗ λ(φ(x)) for all x ∈ V.

Here, the first inequality follows from Lemma 3.6 in [21] and the second inequality follows from Remark 1, Item (ii).

Example 7
For any a, b ∈ V, consider the map

P_{a,b} := L_a L_b + L_b L_a − L_{a∘b}.

Clearly, P_{a,a} = P_a. Now, for any a > 0 and 0 ≤ t ≤ 1, it has been shown in [10] that

P_{√a}(x) ≺ P_{a^t, a^{1−t}}(x) ≺ L_a(x) for all x ∈ V.

Hence, by the 'isotonicity' property of η (Item (f) in Theorem 4.1), we have

η(P_{√a}) ≺_w η(P_{a^t, a^{1−t}}) ≺_w η(L_a).

Since η(P_{√a}) = λ(a) = η(L_a) (note that a > 0), we get λ(a) ≺_w η(P_{a^t, a^{1−t}}) ≺_w λ(a). It follows that

η(P_{a^t, a^{1−t}}) = λ(a)  (a > 0, 0 ≤ t ≤ 1).

It would be interesting to compute η(P_{a,b}) for general a, b ∈ V.

Example 8 (Schur product induced maps)
Consider a fixed Jordan frame {e₁, e₂, . . . , eₙ} in V. This induces the Peirce decomposition ([6], Theorem IV.2.1) V = Σ_{1≤i≤j≤n} V_{ij}, and for any x ∈ V, x = Σ_{i≤j} x_{ij}, where x_{ij} ∈ V_{ij}. Now, for any given A = [a_{ij}] ∈ Sⁿ, we define the Schur product A • x := Σ_{i≤j} a_{ij} x_{ij}. Properties of the Schur product and the induced transformation D_A : x ↦ A • x are studied in [11]. The Lyapunov transformation L_a and the quadratic representation P_a are special cases. Further special cases are described below.

(i) Suppose A ∈ Sⁿ is positive semidefinite and consider the map D_A : V → V defined by D_A(x) := A • x. Then, D_A is a self-adjoint positive linear map. So,

η(D_A) = λ(D_A(e)) = (diag A)↓.

In what follows, we provide an estimate for η(D_A) when A is not necessarily positive semidefinite. We write A = A₊ − A₋, where A₊ and A₋ are positive semidefinite. By Theorem 4.1, we have

η(D_A) = η(D_{A₊} − D_{A₋}) ≺_w η(D_{A₊}) + η(D_{A₋}) = (diag A₊)↓ + (diag A₋)↓.

On the other hand,

|diag A|↓ = λ(|D_A(e)|) ≺_w η(D_A) ∗ λ(e) = η(D_A).

Hence, we have

|diag A|↓ ≺_w η(D_A) ≺_w (diag A₊)↓ + (diag A₋)↓

for any symmetric matrix A.

(ii) Now suppose A = [a_{ij}], B = [b_{ij}] ∈ Sⁿ with b_{ij} ≠ 0 for all i, j. Define the matrix C := [a_{ij}/b_{ij}] ∈ Sⁿ. Then,

A • x = C • (B • x) = D_C(B • x).

Hence,

λ(|A • x|) ≺_w η(D_C) ∗ λ(|B • x|)  (x ∈ V).
In particular, when C is positive semidefinite (in which case D_C is a self-adjoint positive map),

λ(|A • x|) ≺_w (diag C)↓ ∗ λ(|B • x|)  (x ∈ V).

(iii) Suppose λ(A • x) ≺ λ(B • x) for all x ∈ V. Pointwise majorization results of this type have recently been studied in [10]. Thanks to the isotonicity of η, the pointwise inequality λ(A • x) ≺ λ(B • x) implies that η(D_A) ≺_w η(D_B). Example 7 above is an illustration of this.

Motivated by our results and examples, we raise some open problems.
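Before turning to the problems, Example 8(i) can be illustrated numerically. In V = Sⁿ with the Jordan frame of standard diagonal matrix units, the Schur product A • x is the entrywise (Hadamard) product of A and X, so the inequality λ(|A • x|) ≺_w (diag A)↓ ∗ λ(|x|) for positive semidefinite A can be checked directly. A minimal sketch, assuming numpy; the random data and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

def lam_abs(X):
    """lambda(|X|): absolute eigenvalues of symmetric X, decreasing."""
    return np.sort(np.abs(np.linalg.eigvalsh(X)))[::-1]

def weakly_majorized(s, q, tol=1e-8):
    """s ≺_w q via the partial-sum criterion."""
    s, q = np.sort(s)[::-1], np.sort(q)[::-1]
    return np.all(np.cumsum(s) <= np.cumsum(q) + tol)

# With the standard diagonal Jordan frame in S^n, A • x is the
# entrywise (Hadamard) product A * X.
M = rng.standard_normal((n, n))
A = M @ M.T                          # positive semidefinite
eta = np.sort(np.diag(A))[::-1]      # eta(D_A) = (diag A) arranged decreasingly

for _ in range(200):
    X = rng.standard_normal((n, n)); X = X + X.T
    assert weakly_majorized(lam_abs(A * X), eta * lam_abs(X))
print("Schur-product inequality verified on all samples")
```

The same helper can be reused to test the factorized bound of Example 8(ii) once C = [a_{ij}/b_{ij}] is formed.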
Problem 1.
Consider a linear map T : Hⁿ → Hⁿ. This can be extended to a (complex) linear map T̃ : Mₙ → Mₙ by

T̃(X) := T(A) + i T(B),

where A := (X + X*)/2 and B := (X − X*)/(2i) are in Hⁿ. Then, we have η(T) coming from Theorem 3.1 and η(T̃) coming from the result of Bapat (mentioned in the Introduction). To emphasize the algebras involved and to differentiate them, let us write η(T, Hⁿ) and η(T̃, Mₙ). Then,

s(T̃(X)) ≺_w η(T̃, Mₙ) ∗ s(X) for all X ∈ Mₙ

and

λ(|T(X)|) ≺_w η(T, Hⁿ) ∗ λ(|X|) for all X ∈ Hⁿ.

From Theorem 3.1 we see that η(T, Hⁿ) ≺_w η(T̃, Mₙ). We now ask if (or under what conditions) equality holds in the above.
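A quick numerical sanity check of the decomposition used above: every X ∈ Mₙ splits as X = A + iB with Hermitian A := (X + X*)/2 and B := (X − X*)/(2i), and the extension T̃ agrees with T on Hⁿ. The sketch assumes numpy, and the map T(Y) = MYM* below is an arbitrary illustrative choice, not tied to the problem:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
# Illustrative real-linear map on Hermitian matrices (M is arbitrary data)
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def T(Y):
    return M @ Y @ M.conj().T

def T_ext(X):
    """Complex-linear extension of T via the decomposition X = A + iB."""
    A = (X + X.conj().T) / 2         # Hermitian part
    B = (X - X.conj().T) / (2j)      # also Hermitian
    return T(A) + 1j * T(B)

X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (X + X.conj().T) / 2
B = (X - X.conj().T) / (2j)
assert np.allclose(A, A.conj().T) and np.allclose(B, B.conj().T)
assert np.allclose(A + 1j * B, X)    # X is recovered from its Hermitian parts

H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = H + H.conj().T                   # Hermitian test point: its B-part is zero
assert np.allclose(T_ext(H), T(H))   # the extension restricts to T on H^n
print("decomposition and extension agree on Hermitian inputs")
```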
Problem 2.
For a matrix A ∈ Mₙ, consider the Lyapunov transformation L_A on Hⁿ. Computing the norms of L_A and its inverse (whenever defined) relative to spectral p-norms is still an open problem [7, 3]. A related problem could be the description of η(L_A); see Example 1 (for Hermitian A).

Problem 3.
Given A ∈ Sⁿ and a fixed Jordan frame in V, consider the Schur product induced map D_A (as in Example 8). Is there a description of η(D_A)?

References

[1] M. Baes, Convexity and differential properties of spectral functions and spectral mappings on Euclidean Jordan algebras, Linear Algebra Appl., 422 (2007) 664-700.
[2] R. Bhatia, Matrix Analysis, Springer, New York, 1997.
[3] R. Bhatia, A note on Lyapunov equation, Linear Algebra Appl., 259 (1997) 71-76.
[4] R.B. Bapat, Majorization and singular values. III, Linear Algebra Appl., 145 (1991) 59-70.
[5] M.L. Eaton and M.D. Perlman, Reflection groups, generalized Schur functions, and the geometry of majorization, Ann. Probab., 5 (1977) 829-860.
[6] J. Faraut and A. Koranyi, Analysis on Symmetric Cones, Clarendon Press, Oxford, 1994.
[7] J. Feng, J. Lam, and Z. Li, On a conjecture about the norm of Lyapunov mappings, Linear Algebra Appl., 165 (2015) 88-103.
[8] M.S. Gowda, Positive and doubly stochastic maps, and majorization in Euclidean Jordan algebras, Linear Algebra Appl., 528 (2017) 40-61.
[9] M.S. Gowda, Optimizing certain combinations of spectral and linear/distance functions over spectral sets, arXiv:1902.06640v2, 2019.
[10] M.S. Gowda, Some majorization inequalities induced by Schur products in Euclidean Jordan algebras, Linear Algebra Appl., 600 (2020) 1-21.
[11] M.S. Gowda, R. Sznajder, and J. Tao, Complementarity properties of Peirce-diagonalizable linear transformations on Euclidean Jordan algebras, Optim. Methods Softw., 27 (2012) 719-733.
[12] M.S. Gowda and J. Tao, Z-transformations on proper and symmetric cones, Mathematical Programming, Series B, 117 (2009) 195-222.
[13] M.S. Gowda and J. Tao, The Cauchy interlacing theorem in simple Euclidean Jordan algebras and some consequences, Linear and Multilinear Algebra, 59 (2011) 65-86.
[14] L. Gurvits, Combinatorics hidden in hyperbolic polynomials and related topics, arXiv:math/0402088v1 [math.CO], 2004.
[15] J. Jeong and M.S. Gowda, Spectral sets and functions in Euclidean Jordan algebras, Linear Algebra Appl., 518 (2017) 31-56.
[16] J. Jeong, Y.M. Jung, and Y. Lim, Weak majorization, doubly substochastic maps, and some related inequalities in Euclidean Jordan algebras, Linear Algebra Appl., 597 (2020) 133-154.
[17] A. Lewis, Group invariance and convex matrix analysis, SIAM J. Matrix Anal. Appl., 17 (1996) 927-949.
[18] Y. Lim, J. Kim, and L. Faybusovich, Simultaneous diagonalization on simple Euclidean Jordan algebras and its applications, Forum Mathematicum, 15 (2003) 639-644.
[19] M. Moldovan, A Gersgorin type theorem, spectral inequalities, and simultaneous stability in Euclidean Jordan algebras, PhD Thesis, University of Maryland, Baltimore County, 2009.
[20] M. Niezgoda, G-majorization inequalities for linear maps, Linear Algebra Appl., 292 (1999) 207-231.
[21] J. Tao, J. Jeong, and M.S. Gowda,