Renormalization of total sets of states into generalized bases with a resolution of the identity
A. Vourdas
Department of Computer Science, University of Bradford, Bradford BD7 1DP, United Kingdom
[email protected]
A total set of states for which we have no resolution of the identity (a ‘pre-basis’) is considered in a finite-dimensional Hilbert space. A dressing formalism renormalizes them into density matrices which resolve the identity, and makes them a ‘generalized basis’, which is practically useful. The dressing mechanism is inspired by Shapley’s methodology in cooperative game theory, and it uses Möbius transforms. There is non-independence and redundancy in these generalized bases, which is quantified with a Shannon type of entropy. Due to this redundancy, calculations based on generalized bases are sensitive to physical changes and robust in the presence of noise. For example, the representation of an arbitrary vector in such generalized bases is robust when noise is inserted in the coefficients. Also, in a physical system with a ground state which changes abruptly at some value of the coupling constant, the proposed methodology detects such changes, even when noise is added to the parameters in the Hamiltonian of the system.
I. INTRODUCTION
Redundancy is important for error correction. Without redundancy in our language (quantified by Shannon [1] and later by many others) we would not be able to communicate, because a minor spelling mistake would change completely the meaning. The analogue of this in the context of Hilbert spaces is that calculations based on orthonormal bases are sensitive to noise. In contrast, calculations based on total (or overcomplete) sets of vectors that can be used as generalized bases are much less sensitive to noise. A set Σ of vectors is called total if there is no vector in the Hilbert space which is orthogonal to all vectors in Σ.
A total set of vectors can be used as a generalized basis only if there is a resolution of the identity in terms of them, which can be used to expand an arbitrary vector in terms of the vectors in the total set. In the present paper we consider a d-dimensional Hilbert space H_d, and an arbitrary total set of n > d vectors (for which in general we have no resolution of the identity). We renormalize them into a set of n mixed states (density matrices) that resolve the identity.
The renormalization formalism is analogous to the Shapley methodology in cooperative game theory [2–5]. In a recent paper [6] we have used this methodology mainly with the set of n = d² coherent states (which is a special case of a total set), and we only discussed briefly the application of the formalism to an arbitrary total set. In the present paper we expand the use of the formalism with an arbitrary total set, as follows:
• The formalism is presented directly in a quantum context. The analogy between the Shapley methodology in cooperative game theory and our approach in a quantum context has been discussed in detail in [6] and is not discussed here. We note that cooperative game theory uses scalar quantities, while quantum mechanics uses matrices.
• The formalism leads to n > d density matrices σ(i), which resolve the identity and which can be used as a generalized basis.
The term ‘generalized basis’ reflects:
– the fact that it consists of density matrices (i.e., vectors with probabilities attached to them);
– their non-independence (the number of them is greater than the dimension of the space). The non-independence and redundancy in this generalized basis is quantified with a Shannon type of entropy, which is shown to take values in the interval (log n − log d, log n). The merit of having this redundancy is that it makes calculations with generalized bases sensitive to physical changes and immune to noise.
• Coherent states are uniformly distributed in phase space, and the corresponding renormalized density matrices σ(i) (studied in [6]) have strong properties related to coherence; e.g., they are related to each other through displacement transformations. However, from a practical point of view, for large dimension d the calculation is tedious (in a d-dimensional space there are d² coherent states, which lead to d² renormalized density matrices σ(i)). The formalism discussed in this paper is general, and we can take n slightly larger than d, so that we have the merits of redundancy with fewer renormalized density matrices σ(i) and a simpler calculation. Of course, in this general case the renormalized density matrices σ(i) have weaker properties than in the case of coherent states.
• The emphasis in this paper is on the applications of the formalism, as follows:
– It is shown that the representation of a vector in our generalized bases is robust in the presence of noise, in the sense that addition of random numbers to the coefficients does not change the vector significantly.
– The formalism is applied to the study of the ground state (i.e., the eigenstate corresponding to the lowest eigenvalue) of physical systems. We consider a system in which the ground state changes abruptly at some value of the coupling constant. We show that our generalized bases can detect such changes even in the presence of noise.
In large (ideally infinite) systems, such an abrupt change of the ground state is associated with a phase transition.
The whole area of coherent states, POVMs (positive operator valued measures), frames and wavelets (e.g., [7–9]) is a kind of generalized bases, and calculations that use them are robust in the presence of noise, due to redundancy. An arbitrary state can be expanded in terms of coherent states or POVMs, because of a resolution of the identity. In frames we have no exact resolution of the identity, but we have lower and upper bounds to it. In our case we start from a total set of n > d states, which we renormalize, and we get n density matrices {σ(i)} that resolve the identity. They can be used as a generalized basis, which is robust in the presence of noise.
In section 2 we define various quantities and explain the notation. In section 3 we present briefly the Möbius transforms. We discussed them in a different context in [10, 11], and here we only give briefly the relevant formulas, together with a new proposition on the trace of these operators (proposition III.1), which is used later.
In section 4 we show how to renormalize a total set of vectors into a generalized basis, which resolves the identity. The starting point is a resolution of the identity that contains projectors associated with the vectors in the total set, and Möbius operators. Using an approach inspired by the Shapley methodology in cooperative game theory, we assign the Möbius operators to the projectors, and convert them into density matrices that resolve the identity.
In section 5 the redundancy in the generalized bases is quantified with a Shannon type of entropy. In section 6 we use the generalized bases to represent a vector in H_d with n components. We then add noise to these components, and reconstruct the original vector.
It is shown that the error in this reconstruction is smaller in the case of our generalized bases than in the case of orthonormal bases.
In section 7 we consider a physical system with a two-dimensional Hilbert space and with Hamiltonian θ(λ), whose ground state changes abruptly at some value of the coupling constant λ. Such a system is often used as an approximation to an infinite-dimensional system which, due to low energy, operates in the subspace of the lowest two states. Many of the experimentally available qubits are of this type (e.g., the superconducting qubits). We define the concept of the location index L[θ(λ)] of θ(λ), with respect to a generalized basis {σ(i)}. We then define comonotonicity intervals of the coupling parameter λ, within which the location index L[θ(λ)] remains constant, and associate them with mild changes in the physical system. Crossing points from one comonotonicity interval to another indicate a possible drastic change in the ground state of the system. We show that the method works well even when we add noise to the parameters of the Hamiltonian.
We conclude in section 8 with a discussion of our results.

II. PRELIMINARIES
We consider a d-dimensional Hilbert space H_d, and an orthonormal basis of ‘position states’ which we denote as |X;α⟩. Here α ∈ Z(d) (the integers modulo d), and the X in the notation is not a variable; it simply indicates position states. The Fourier transform is defined as

F = (1/√d) Σ_{α,β} ω(αβ) |X;α⟩⟨X;β| ;  ω(α) = exp(i 2πα/d).   (1)

Definition II.1.
A ‘pre-basis’ in the d-dimensional Hilbert space H_d is a set of n > d states

Σ = { |i⟩ | i ∈ Ω } ;  Ω = {1, ..., n}   (2)

such that:
• Any subset of d of these states is linearly independent.
• Σ, and also any of its subsets with r ≥ d of these states, is a total set.
• In general, we have no resolution of the identity in terms of these n states.
We call

R = (n − d)/d > 0   (3)

the redundancy index. For coherent states R = d − 1, and for large d this is a large redundancy. The formalism in this paper is general, but from a practical point of view it should be used with positive but small values of R.
Let H(A) = H(i₁, ..., i_r) be the subspace of H_d spanned by the states |i₁⟩, ..., |i_r⟩:

H(A) = H(i₁, ..., i_r) = span{ |i₁⟩, ..., |i_r⟩ } ;  A = {i₁, ..., i_r} ⊆ Ω.   (4)

If r < d then H(A) is an r-dimensional subspace of H_d. If r ≥ d, then H(A) = H_d. We call Π(A) = Π(i₁, ..., i_r) the projector to the subspace H(A). In general

Π(i₁, ..., i_r) ≠ Π(i₁) + ... + Π(i_r).   (5)

Only if the kets |i₁⟩, ..., |i_r⟩ are orthogonal to each other do we get equality in this equation. Also, in general there is no constant µ such that

µ[Π(i₁) + ... + Π(i_r)] = 1.   (6)

In special cases (e.g., with the total set of n = d² coherent states), we might get equality in Eq.(6).
Cooperative game theory renormalizes the individual contribution of a player by adding his contribution to aggregations of players. Similarly, we renormalize Π(i) = |i⟩⟨i| by adding to it the contributions of the state i to aggregations of states described with projectors Π(A) where i ∈ A.

III. MÖBIUS TRANSFORMS
Möbius transforms have been introduced by Rota [12, 13]. They are a generalization of the ‘inclusion–exclusion’ principle in set theory, which gives the cardinality of the union of overlapping sets. Möbius transforms find the overlaps between sets, and thus avoid the double-counting. Rota generalized them to partially ordered structures, and here we use them with projectors to Hilbert spaces.
In refs [10, 11] we have discussed Möbius transforms in a different context, and in this section we only give briefly the relevant formulas. The Möbius transform of the projectors Π(A) is given by:

D(B) = Σ_{A⊆B} (−1)^{|B|−|A|} Π(A) ;  A, B ⊆ Ω.   (7)

The inverse Möbius transform is

Π(A) = Σ_{B⊆A} D(B).   (8)

Some examples are:

D(1) = Π(1) ;  D(1,2) = Π(1,2) − Π(1) − Π(2)
D(1,2,3) = Π(1,2,3) − Π(1,2) − Π(1,3) − Π(2,3) + Π(1) + Π(2) + Π(3),   (9)

and then

Π(1,2) = D(1,2) + D(1) + D(2)
Π(1,2,3) = D(1,2,3) + D(1,2) + D(1,3) + D(2,3) + D(1) + D(2) + D(3).   (10)

We note that if the total set Σ consists of an orthonormal set of d states, then all the D(B) with |B| ≥ 2 are zero.
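As a minimal numerical sketch (assuming NumPy; the states are random test vectors, not taken from the paper), Eqs.(7)–(10) can be implemented directly with orthogonal projectors onto spans of subsets of a total set:

```python
# Sketch of Eqs.(7)-(10): Moebius transforms of subspace projectors and the
# inverse transform, for a random total set of n = 4 states in d = 2.
import itertools
import numpy as np

rng = np.random.default_rng(0)
d, n = 2, 4
kets = [v / np.linalg.norm(v)
        for v in rng.normal(size=(n, d)) + 1j * rng.normal(size=(n, d))]

def proj(A):
    """Projector Pi(A) onto span{|i> : i in A}; Pi(empty) = 0."""
    if not A:
        return np.zeros((d, d), complex)
    M = np.column_stack([kets[i] for i in A])
    return M @ np.linalg.pinv(M)   # orthogonal projector onto col(M)

def mobius(B):
    """D(B) = sum over subsets A of B of (-1)^(|B|-|A|) Pi(A), Eq.(7)."""
    D = np.zeros((d, d), complex)
    for r in range(len(B) + 1):
        for A in itertools.combinations(B, r):
            D += (-1) ** (len(B) - r) * proj(A)
    return D

# inverse transform, Eq.(8): Pi(A) = sum over subsets B of A of D(B)
for r in range(n + 1):
    for A in itertools.combinations(range(n), r):
        recon = sum((mobius(B) for rr in range(len(A) + 1)
                     for B in itertools.combinations(A, rr)),
                    np.zeros((d, d), complex))
        assert np.allclose(recon, proj(A))

# spot checks of the traces (proposition III.1 below): Tr D(B) = 0 for
# |B| = d = 2, and Tr D(B) = (-1)^(d-|B|) C(|B|-2, d-1) = -1 for |B| = 3
assert abs(np.trace(mobius((0, 1)))) < 1e-10
assert abs(np.trace(mobius((0, 1, 2))) + 1) < 1e-10
```

The `pinv`-based projector is one standard way to project onto the span of non-orthogonal vectors; for an orthonormal subset the D(B) with |B| ≥ 2 would all vanish, as noted above.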
Proposition III.1. The trace of D(B) is given by

Tr[D(B)] = 1, if |B| = 1
Tr[D(B)] = 0, if 2 ≤ |B| ≤ d
Tr[D(B)] = (−1)^{d−|B|} C(|B|−2, d−1), if |B| ≥ d + 1,   (11)

where C(·,·) denotes the binomial coefficient.

Proof. We first point out that

Tr[Π(A)] = |A|, if |A| ≤ d ;  Tr[Π(A)] = d, if |A| > d.   (12)

In the sum of Eq.(7) there are C(|B|, k) sets A with the same cardinality |A| = k. Therefore if 2 ≤ |B| ≤ d we get [14]

Tr[D(B)] = (−1)^{|B|} Σ_{k=1}^{|B|} (−1)^k C(|B|, k) k = (−1)^{|B|+1} |B| Σ_{k=1}^{|B|} (−1)^{k−1} C(|B|−1, k−1)
= (−1)^{|B|+1} |B| Σ_{k=0}^{|B|−1} (−1)^k C(|B|−1, k) = (−1)^{|B|+1} |B| (1 − 1)^{|B|−1} = 0.   (13)

In the case |B| ≥ d + 1 we get

Tr[D(B)] = (−1)^{|B|} Σ_{k=1}^{d} (−1)^k C(|B|, k) k + d (−1)^{|B|} Σ_{k=d+1}^{|B|−1} (−1)^k C(|B|, k) + d.   (14)

But

(−1)^{|B|} Σ_{k=1}^{d} (−1)^k C(|B|, k) k = (−1)^{|B|} |B| Σ_{k=1}^{d} (−1)^k C(|B|−1, k−1) = (−1)^{d−|B|} |B| C(|B|−2, d−1).   (15)

Also (use formula 0.151.4 in [14])

Σ_{k=0}^{|B|−1} (−1)^k C(|B|, k) = (−1)^{|B|−1} ;  Σ_{k=0}^{d} (−1)^k C(|B|, k) = (−1)^d C(|B|−1, d).   (16)

Combining these results we prove that

d (−1)^{|B|} Σ_{k=d+1}^{|B|−1} (−1)^k C(|B|, k) = −d − d (−1)^{d−|B|} C(|B|−1, d) = −d − (−1)^{d−|B|} (|B|−1) C(|B|−2, d−1),   (17)

and then prove the last relation in the proposition.
Möbius transforms are intimately related to commutators that involve the projectors, e.g. [10, 11],

[Π(i), Π(j)] = D(i,j)[Π(i) − Π(j)]
[[Π(i), Π(k)], Π(j)] = Π(j) D(i,j,k) [Π(i) − Π(k)] + [Π(i) − Π(k)] D(i,j,k) Π(j).   (18)

Working with Möbius operators is equivalent to taking into account the non-commutativity of the projectors Π(i).

IV. RENORMALIZATION OF A PRE-BASIS INTO A GENERALIZED BASIS
Definition IV.1. A generalized basis in H_d is a set of n > d density matrices {σ(i)} which obey the relation

Σ_i σ(i) = λ 1,   (19)

where λ is a constant.
In this section we renormalize an arbitrary pre-basis into a generalized basis, using Möbius transformations. If A in Eq.(8) is the total set Ω of Eq.(2), then Π(Ω) = 1 and we get

Σ_{B⊆Ω} D(B) = Σ_{i∈Ω} Π(i) + Σ_{i<j} D(i,j) + Σ_{i<j<k} D(i,j,k) + ... = 1.   (20)

This is a resolution of the identity that involves not only the projectors Π(i), but also the Möbius operators D(i,j), D(i,j,k), etc. The D(i,j) = Π(i,j) − Π(i) − Π(j) ‘belongs’ to both states i, j, and Eq.(18) shows that it is related to the commutator [Π(i), Π(j)]. We divide this ‘joint property’ equally among all its ‘owners’: half of it to i and the other half to j. Similarly, the D(i,j,k) ‘belongs’ to the states labeled with i, j, k, and we allocate a third of it to each of these three states; etc. So we resolve the identity in Eq.(20) as

Σ_{i∈Ω} τ(i) = 1 ;  τ(i) = Σ_{B⊆Ω∖{i}} D(B ∪ {i})/|B ∪ {i}| = Π(i) + (1/2) Σ_j D(i,j) + (1/3) Σ_{j<k} D(i,j,k) + ...   (21)

In τ(i) the summations are over all aggregations that involve the state i. We will show that the τ(i) with appropriate normalization are density matrices.
The following lemma expresses τ(i) as a sum of projectors, and will be used below to prove that the τ(i) are positive semidefinite operators. It has been proved indirectly in ref [6], through analogy with similar results in cooperative game theory. Below we give a direct combinatorial proof.

Lemma IV.2. Let ϖ(i|A) be the projectors

ϖ(i|A) = Π({i} ∪ A) − Π(A) ;  A ⊆ Ω∖{i}.   (22)

The τ(i) can be expressed as

τ(i) = (1/n) Σ_{A⊆Ω∖{i}} C(n−1, |A|)^{−1} ϖ(i|A).   (23)

Proof. We count the number of projectors Π(A), with A ⊆ Ω∖{i}, in the right-hand side of Eq.(21). There are C(n−1−|A|, k) Möbius operators D(B ∪ {i}) with A ⊆ B ⊆ Ω∖{i} and |B| = |A| + k. Each of them contains Π(A) with sign (−1)^{k+1}, and also Π(A ∪ {i}) with sign (−1)^k. Therefore the coefficient of the projector Π(A) in the right-hand side of Eq.(21) is

−Σ_{k=0}^{n−1−|A|} (−1)^k C(n−1−|A|, k)/(|A| + k + 1) = −(1/n) C(n−1, |A|)^{−1}.   (24)

We used here the combinatorial relation

Σ_{k=0}^{N} (−1)^k C(N, k)/(w + k + 1) = w! N!/(N + w + 1)!.   (25)

The coefficient of the projector Π(A ∪ {i}) is also given by Eq.(24), but with a plus sign. This proves Eq.(23).

Remark IV.3. For a given A, the projectors ϖ(i|A), Π({i} ∪ A), Π(A) commute with each other. Measurement with ϖ(i|A) will give the result ‘yes’ if the measurement Π({i} ∪ A) gives ‘yes’ and the measurement Π(A) gives ‘no’. Measurement with ϖ(i|A) = Π({i} ∪ A) − Π(A) gives the probability that the state of the system belongs to the space H({i} ∪ A) but does not belong to the space H(A).

Proposition IV.4.
(1) The τ(i) are positive-semidefinite Hermitian matrices.
(2) The σ(i) given by

σ(i) = (n/d) τ(i) ;  (d/n) Σ_{i∈Ω} σ(i) = 1,   (26)

are density matrices which resolve the identity.
(3) If the total set Σ consists of an orthonormal set of d states, then σ(i) = Π(i).

Proof.
(1) τ(i) is given in Eq.(23) as a sum of projectors with positive coefficients, and this proves that they are positive-semidefinite Hermitian matrices.
(2) There are C(n−1, |A|) projectors ϖ(i|A) with the same cardinality |A| of A, and Tr[ϖ(i|A)] = 1 for |A| ≤ d − 1, while Tr[ϖ(i|A)] = 0 for |A| ≥ d. Therefore

Tr[τ(i)] = (1/n) Σ_{A⊆Ω∖{i}} C(n−1, |A|)^{−1} Tr[ϖ(i|A)] = (1/n) Σ_{|A|=0}^{d−1} 1 = d/n.   (27)

An alternative proof is to use Eq.(21) and proposition III.1. It is seen that the trace of τ(i) does not depend on i. It follows that the σ(i) = (n/d) τ(i) are density matrices.
(3) We have explained earlier that if the total set Σ consists of an orthonormal set of d states, then the Möbius operators D(B) = 0 for |B| ≥ 2. Therefore in this case σ(i) = Π(i).

Proposition IV.5.
Let {Π(i)} be a pre-basis, {σ(i)} the corresponding generalized basis, and U a unitary transformation. Then the generalized basis corresponding to the pre-basis {Π_U(i) = UΠ(i)U†} is {σ_U(i) = Uσ(i)U†}, and it obeys the resolution of the identity

(d/n) Σ_{i∈Ω} σ_U(i) = 1.   (28)

In particular, if F is the Fourier transform, the generalized basis corresponding to the pre-basis {Π̃(i) = FΠ(i)F†} is {σ̃(i) = Fσ(i)F†}, and it obeys the resolution of the identity

(d/n) Σ_{i∈Ω} σ̃(i) = 1.   (29)

Proof. Eq.(7) shows that if the D(B) are the Möbius transforms of the projectors Π(A), then the UD(B)U† are the Möbius transforms of the projectors UΠ(A)U†. Then from Eq.(21) follows the statement in the proposition. Acting with U and U† on both sides of Eq.(26), we prove the resolution of the identity in Eq.(28). The Fourier transform is a special case of a unitary transformation.

A. Example I

In H₂ we consider the total set of states:

Σ = { |X;0⟩ , (1/√5)(|X;0⟩ + 2i|X;1⟩) , (1/√2)(|X;0⟩ + |X;1⟩) }.   (30)

In this case n = 3, and (matrices are written row by row, rows separated by semicolons)

D(1,2) = (1/5)( −1 , 2i ; −2i , 1 ) ;  D(1,3) = (1/2)( −1 , −1 ; −1 , 1 ) ;  D(2,3) = (1/10)( 3 , −5+4i ; −5−4i , −3 )
D(1,2,3) = (1/10)( −3 , 5−4i ; 5+4i , −7 ).   (31)

Then

(2/3) σ(1) = Π(1) + (1/2)[D(1,2) + D(1,3)] + (1/3) D(1,2,3),   (32)

and similarly for σ(2), σ(3). Therefore

Π(1) = ( 1 , 0 ; 0 , 0 ) → σ(1) = ( 0.825 , −0.125+0.100i ; −0.125−0.100i , 0.175 )
Π(2) = (1/5)( 1 , −2i ; 2i , 4 ) → σ(2) = ( 0.225 , −0.125−0.200i ; −0.125+0.200i , 0.775 )
Π(3) = (1/2)( 1 , 1 ; 1 , 1 ) → σ(3) = ( 0.450 , 0.250+0.100i ; 0.250−0.100i , 0.550 ).   (33)

The resolution of the identity is

(2/3)[σ(1) + σ(2) + σ(3)] = 1.   (34)

We also give the Fourier transform of this generalized basis:

Π̃(1) = (1/2)( 1 , 1 ; 1 , 1 ) → σ̃(1) = ( 0.375 , 0.325−0.100i ; 0.325+0.100i , 0.625 )
Π̃(2) = ( 0.5 , −0.3+0.4i ; −0.3−0.4i , 0.5 ) → σ̃(2) = ( 0.375 , −0.275+0.200i ; −0.275−0.200i , 0.625 )
Π̃(3) = ( 1 , 0 ; 0 , 0 ) → σ̃(3) = ( 0.750 , −0.050−0.100i ; −0.050+0.100i , 0.250 ).   (35)

The resolution of the identity in this case is

(2/3)[σ̃(1) + σ̃(2) + σ̃(3)] = 1.   (36)

B. Example II

In H₂ we consider the total set of states:

Σ = { |X;0⟩ , (1/√5)(|X;0⟩ + 2i|X;1⟩) , (1/√2)(|X;0⟩ + |X;1⟩) , (1/√5)(|X;0⟩ + 2|X;1⟩) }.   (37)

In comparison to the previous example, we added here the fourth vector. In this case n = 4. The D(1,2), D(1,3), D(2,3) and D(1,2,3) are the same as in Eq.(31). Also

D(1,4) = (1/5)( −1 , −2 ; −2 , 1 ) ;  D(2,4) = (1/5)( 3 , −2+2i ; −2−2i , −3 ) ;  D(3,4) = (1/10)( 3 , −9 ; −9 , −3 )
D(1,2,4) = (1/5)( −3 , 2−2i ; 2+2i , −2 ) ;  D(1,3,4) = (1/10)( −3 , 9 ; 9 , −7 )
D(2,3,4) = (1/10)( −11 , 9−4i ; 9+4i , 1 ) ;  D(1,2,3,4) = (1/10)( 11 , −9+4i ; −9−4i , 9 ).   (38)

Then

(1/2) σ(1) = Π(1) + (1/2)[D(1,2) + D(1,3) + D(1,4)] + (1/3)[D(1,2,3) + D(1,2,4) + D(1,3,4)] + (1/4) D(1,2,3,4),   (39)

and similarly for σ(2), σ(3), σ(4). Therefore

Π(1) = ( 1 , 0 ; 0 , 0 ) → σ(1) = ( 0.850 , −0.150+0.067i ; −0.150−0.067i , 0.150 )
Π(2) = (1/5)( 1 , −2i ; 2i , 4 ) → σ(2) = ( 0.317 , −0.150−0.200i ; −0.150+0.200i , 0.683 )
Π(3) = (1/2)( 1 , 1 ; 1 , 1 ) → σ(3) = ( 0.517 , 0.183+0.067i ; 0.183−0.067i , 0.483 )
Π(4) = (1/5)( 1 , 2 ; 2 , 4 ) → σ(4) = ( 0.317 , 0.117+0.067i ; 0.117−0.067i , 0.683 ).   (40)

The resolution of the identity is

(1/2)[σ(1) + σ(2) + σ(3) + σ(4)] = 1.   (41)

V. NON-INDEPENDENCE AND REDUNDANCY IN THE GENERALIZED BASES

A. The coefficients s_θ(i) of Hermitian operators with respect to a generalized basis

We consider a Hermitian operator θ and the n real numbers

s_θ(i) = (d/n) Tr[θ σ(i)] ;  Σ_{i=1}^{n} s_θ(i) = Tr(θ).   (42)

Using the notation

θ_{αβ} = ⟨X;α|θ|X;β⟩ ;  σ_{αβ}(i) = ⟨X;α|σ(i)|X;β⟩ ;  α, β ∈ Z(d),   (43)

we get

s_θ(i) = (d/n) Σ_{α,β} θ_{αβ} σ_{βα}(i).   (44)

We assume that the n values of s_θ(i) are known, and the d² values of θ_{αβ} are unknown. Then this is a system of n equations with d² unknowns. There are three cases:
• If n = d², we can calculate the θ_{αβ} (i.e., the operator θ) from the s_θ(i). This is the case if we consider projectors Π(i) associated to coherent states. We have studied this case in [6].
• If n > d², and the values of s_θ(i) are accurate, the n equations are compatible, and the system has an exact solution. If the values of s_θ(i) are ‘noisy’, we can still find an ‘optimum solution’. All computer libraries can solve systems with more equations than unknowns, by minimizing the error, i.e., by minimizing the incompatibility between the equations.
• In the case d < n < d², we cannot calculate the θ_{αβ}. However, the information contained in [s_θ(1), ..., s_θ(n)] might be enough for certain physical conclusions. In particular, we show that a change in the order of these numbers might be linked to drastic physical changes in the system. It is this case, with d < n < d², that we study in this paper.

B. The s_ρ(i) as pseudo-probabilities for density matrices

If θ is a density matrix ρ, the s_ρ(i) are the results of measurements on ρ with the Hermitian operators σ(i).
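The renormalization of section 4 and the n ≥ d² case above can be sketched together numerically. In the sketch below (assuming NumPy; the pre-basis is a random set of n = 5 states in d = 2, and θ is a hypothetical test operator, neither taken from the paper), the σ(i) are built from Eqs.(21) and (26) and checked against proposition IV.4, and θ is then recovered from the s_θ(i) of Eq.(42) by least squares, as in the first two cases above:

```python
# Sketch: renormalize a random pre-basis into a generalized basis (Eqs.(21),
# (26)), then recover a Hermitian theta from s_theta(i) = (d/n) Tr[theta sigma(i)].
import itertools
import numpy as np

rng = np.random.default_rng(1)
d, n = 2, 5                      # n = 5 > d^2 = 4: overdetermined system
kets = [v / np.linalg.norm(v)
        for v in rng.normal(size=(n, d)) + 1j * rng.normal(size=(n, d))]

def proj(A):
    if not A:
        return np.zeros((d, d), complex)
    M = np.column_stack([kets[i] for i in A])
    return M @ np.linalg.pinv(M)

def mobius(B):
    D = np.zeros((d, d), complex)
    for r in range(len(B) + 1):
        for A in itertools.combinations(B, r):
            D += (-1) ** (len(B) - r) * proj(A)
    return D

def tau(i):
    """Eq.(21): tau(i) = sum over B not containing i of D(B u {i}) / |B u {i}|."""
    others = [j for j in range(n) if j != i]
    T = np.zeros((d, d), complex)
    for r in range(n):
        for B in itertools.combinations(others, r):
            T += mobius(B + (i,)) / (r + 1)
    return T

sigma = [(n / d) * tau(i) for i in range(n)]
for S in sigma:                                   # proposition IV.4
    assert abs(np.trace(S) - 1) < 1e-9            # unit trace
    assert np.linalg.eigvalsh(S).min() > -1e-9    # positive semidefinite
assert np.allclose((d / n) * sum(sigma), np.eye(d))   # resolution of identity

# recover theta from the s_theta(i): expand theta in a Hermitian matrix basis
G = [np.array([[1, 0], [0, 0]]), np.array([[0, 0], [0, 1]]),
     np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]])]
x_true = np.array([1.0, -0.5, 0.3, 0.2])          # hypothetical coefficients
theta = sum(x * g for x, g in zip(x_true, G))
s = np.array([(d / n) * np.trace(theta @ S).real for S in sigma])
M = np.array([[(d / n) * np.trace(g @ S).real for g in G] for S in sigma])
x_hat, *_ = np.linalg.lstsq(M, s, rcond=None)     # least-squares solution
assert np.allclose(x_hat, x_true, atol=1e-6)
```

With noisy s_θ(i) the same `lstsq` call returns the minimum-error estimate, which is the ‘optimum solution’ mentioned above.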
A given σ(i) corresponds to measurements with all its eigenprojectors |E_α(i)⟩⟨E_α(i)| (each of which gives a ‘yes–no’ outcome), with weights its eigenvalues e_α(i):

s_ρ(i) = (d/n) Tr[ρ σ(i)] = (d/n) Σ_{α=1}^{d} e_α(i) ⟨E_α(i)|ρ|E_α(i)⟩.   (45)

Measurements with different σ(i) are incompatible (they do not commute), and they need to be performed on different ensembles describing the same density matrix ρ. The n outcomes of such measurements are non-independent, but obey the relations

0 ≤ s_ρ(i) ≤ (d/n) M[σ(i)] < d/n < 1 ;  Σ_{i=1}^{n} s_ρ(i) = 1.   (46)

Here M[σ(i)] is the maximum eigenvalue of the density matrix σ(i). The s_ρ(i) ≤ (d/n) M[σ(i)] follows from Eq.(45), if we replace all eigenvalues with the maximum eigenvalue.
We call the s_ρ(i) pseudo-probabilities, where the ‘probabilities’ indicates that they obey Eq.(46), and the ‘pseudo’ indicates that they correspond to non-independent alternatives. Independent alternatives in the present context correspond to orthonormal bases. Since s_ρ(i) < d/n, the case

s_ρ(i₀) = 1 for some i₀ ;  s_ρ(i) = 0 otherwise,   (47)

is not allowed for pseudo-probabilities. This shows clearly the non-independence in the generalized bases.

C. Use of Shannon entropy to quantify the non-independence and redundancy in generalized bases
An entropic quantity [15, 16] that involves n probabilities takes values between 0 and log n. We show that the entropy of our n pseudo-probabilities takes values between (log n − log d) and log n. The lower bound is intimately related to the fact that s_ρ(i) < d/n.

Definition V.1.
The Shannon entropy of a density matrix ρ with respect to our generalized bases is given by:

E_n(ρ) = −Σ_{i=1}^{n} s_ρ(i) log[s_ρ(i)].   (48)

Proposition V.2.
The Shannon entropy of a density matrix ρ with respect to a generalized basis is bounded as follows:

log n − log d < E_n(ρ) ≤ log n.   (49)

Proof.

E_n(ρ) = −Σ_{i=1}^{n} {(d/n) Tr[ρσ(i)]} log{(d/n) Tr[ρσ(i)]}
= −log(d/n) Σ_{i=1}^{n} (d/n) Tr[ρσ(i)] − (d/n) Σ_{i=1}^{n} Tr[ρσ(i)] log Tr[ρσ(i)].   (50)

Taking into account Eq.(42), and the fact that 0 ≤ Tr[ρσ(i)] ≤ 1, we get

E_n(ρ) = −log(d/n) − (d/n) Σ_{i=1}^{n} Tr[ρσ(i)] log Tr[ρσ(i)] > log n − log d.   (51)

We note here that 0 ≤ Tr[ρσ(i)] ≤ M[σ(i)] < 1, and therefore the term −(d/n) Σ Tr[ρσ(i)] log Tr[ρσ(i)] is non-zero (positive).
For the upper bound, we point out that E_n(ρ) involves n probabilities, and therefore log n is an upper bound.

Example V.3.
• If ρ = (1/d) 1 then

s_ρ(i) = 1/n ;  E_n((1/d) 1) = log n.   (52)

• If ρ = |X;α⟩⟨X;α|, then s_ρ(i) = (d/n) σ_{αα}(i) and

E_n(|X;α⟩⟨X;α|) = (log n − log d) − (d/n) Σ_{i=1}^{n} [σ_{αα}(i)] log[σ_{αα}(i)].   (53)

For ρ = |X;0⟩⟨X;0| and with the generalized basis in Eq.(33), we get

σ₀₀(1) = 0.825 ;  σ₀₀(2) = 0.225 ;  σ₀₀(3) = 0.450
E₃(|X;0⟩⟨X;0|) = log 3 − log 2 + 0.569 = 0.974.   (54)

For ρ = |X;0⟩⟨X;0| and with the generalized basis in Eq.(40), we get

σ₀₀(1) = 0.850 ;  σ₀₀(2) = 0.317 ;  σ₀₀(3) = 0.517 ;  σ₀₀(4) = 0.317
E₄(|X;0⟩⟨X;0|) = log 4 − log 2 + 0.603 = 1.296.   (55)
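The pseudo-probabilities and entropies above can be checked numerically; a sketch (assuming NumPy, and using the generalized basis of Eq.(33) with its entries to three decimals) follows:

```python
# Sketch of Eq.(46) and proposition V.2, with the generalized basis of Eq.(33).
import numpy as np

sigma = [np.array([[0.825, -0.125 + 0.1j], [-0.125 - 0.1j, 0.175]]),
         np.array([[0.225, -0.125 - 0.2j], [-0.125 + 0.2j, 0.775]]),
         np.array([[0.450,  0.250 + 0.1j], [ 0.250 - 0.1j, 0.550]])]
d, n = 2, 3
assert np.allclose((d / n) * sum(sigma), np.eye(d))       # Eq.(34)

def s_rho(rho):
    return np.array([(d / n) * np.trace(rho @ S).real for S in sigma])

def entropy(rho):                                         # Eq.(48)
    s = s_rho(rho)
    return -np.sum(s * np.log(s))

Mmax = [np.linalg.eigvalsh(S).max() for S in sigma]
rng = np.random.default_rng(2)
for _ in range(200):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    rho = np.outer(v, v.conj()) / np.linalg.norm(v) ** 2  # random pure state
    s = s_rho(rho)
    assert np.all(s > 0) and abs(s.sum() - 1) < 1e-12     # Eq.(46)
    assert np.all(s <= (d / n) * np.array(Mmax) + 1e-12)
    assert np.log(n) - np.log(d) < entropy(rho) <= np.log(n) + 1e-12

# the maximally mixed state reaches the upper bound log n, Eq.(52) ...
assert abs(entropy(np.eye(d) / d) - np.log(n)) < 1e-12
# ... and the position state |X;0> reproduces Eq.(54), about 0.9746 nats
assert abs(entropy(np.diag([1.0, 0.0])) - 0.9746) < 1e-3
```

Logarithms are base e here, in line with the convention stated below.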
We use the base e for logarithms, and the results are in nats. We have seen above that the upper bound log n in the set of {E_n(ρ)} is reached with the density matrix ρ = (1/d) 1. We have also seen that log n − log d is a lower bound, but it is an open question what the infimum is. We call

𝓡 = log n − log d = log(R + 1)   (56)

the entropic redundancy index. It plays a complementary role to the redundancy index R in Eq.(3).
We have shown that the Shannon entropies E_n(ρ) take values in the interval between 𝓡 and 𝓡 + log d, which has length log d, for any n. In the standard Shannon entropy with respect to an orthonormal basis, 𝓡 = 0.

VI. REPRESENTATION OF VECTORS IN THE GENERALIZED BASIS
An arbitrary normalized vector in H_d can now be expanded in terms of n > d component vectors, as

|V⟩ = Σ_{i=1}^{n} |V(i)⟩ ;  |V(i)⟩ = (d/n) σ(i) |V⟩.   (57)

The scalar product is given by

⟨V|U⟩ = Σ_{i,j} ⟨V|g(i,j)|U⟩ ;  g(i,j) = (d²/n²) σ(i) σ(j)
Σ_{i,j} g(i,j) = 1 ;  [g(i,j)]† = g(j,i).   (58)

The ‘metric’ g(i,j) consists of n² matrices, each of which is a d × d matrix.
We express the density matrices σ(i) in terms of their eigenvalues (probabilities) p_α(i) and their eigenvectors |E_α(i)⟩, as:

σ(i) = Σ_{α=1}^{d} p_α(i) |E_α(i)⟩⟨E_α(i)| ;  Σ_{α=1}^{d} p_α(i) = 1 ;  Σ_{α=1}^{d} |E_α(i)⟩⟨E_α(i)| = 1
(d/n) Σ_{i=1}^{n} Σ_{α=1}^{d} p_α(i) |E_α(i)⟩⟨E_α(i)| = 1.   (59)

Our formalism renormalizes each projector |i⟩⟨i| into the density matrix σ(i), which can be viewed as a set of orthonormal bases |E_α(i)⟩ with probabilities p_α(i) attached to them.

Example VI.1. In H₂ we consider a normalized vector

|V⟩ = V(0)|X;0⟩ + V(1)|X;1⟩.   (60)

We also consider the matrices σ(1), σ(2), σ(3) in Eq.(33), and using the resolution of the identity in Eq.(34) we expand this vector as

|V⟩ = (2/3)[σ(1)|V⟩ + σ(2)|V⟩ + σ(3)|V⟩] = |V(1)⟩ + |V(2)⟩ + |V(3)⟩.   (61)

There is redundancy and ‘duplication’ in this approach, which is precisely the merit of using it. Errors due to noise in some of these components are compensated by the other components, and the overall error is small, as discussed below.
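A numerical sketch of this expansion and of the noise test described below (assuming NumPy; the test vector |V⟩ and the noise amplitude µ = 0.1 are hypothetical choices made here, not values fixed by the text):

```python
# Sketch of Eq.(57) and of the noise comparison of Eqs.(62)-(66), with the
# generalized basis of Eq.(33) (entries to three decimals).
import numpy as np

sigma = [np.array([[0.825, -0.125 + 0.1j], [-0.125 - 0.1j, 0.175]]),
         np.array([[0.225, -0.125 - 0.2j], [-0.125 + 0.2j, 0.775]]),
         np.array([[0.450,  0.250 + 0.1j], [ 0.250 - 0.1j, 0.550]])]
d, n = 2, 3
V = np.array([1 + 1j, -1j]) / np.sqrt(3)       # hypothetical normalized |V>

# Eq.(57): the n component vectors |V(i)> = (d/n) sigma(i) |V> resum to |V>
parts = [(d / n) * (S @ V) for S in sigma]
assert np.allclose(sum(parts), V)

rng = np.random.default_rng(3)
mu, trials = 0.1, 2000                          # assumed noise amplitude
eps, eps_orth = [], []
for _ in range(trials):
    N = rng.uniform(-mu, mu, size=n)            # noise on the n components
    W = sum((1 + Ni) * p for Ni, p in zip(N, parts))
    eps.append(np.linalg.norm(W - V))           # error of Eq.(63)
    Na = rng.uniform(-mu, mu, size=d)           # noise on orthonormal comps
    eps_orth.append(np.linalg.norm(Na * V))     # error of Eq.(66)

# on average, the generalized-basis representation is the more robust one
assert np.mean(eps) < np.mean(eps_orth)
```

The advantage has a simple origin: the diagonal part of the ‘metric’ g(i,i) of Eq.(58) sums to an operator bounded by (d/n) times the identity squared, so the same noise level perturbs the redundant representation less.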
A. Robustness of the representation in the presence of noise
We add noise to the n components of the vector |V⟩ in Eq.(57), and we get the vector:

|W⟩ = (d/n) Σ_{i=1}^{n} (1 + N_i) σ(i) |V⟩.   (62)

Here the N_i are n independent real random numbers, uniformly distributed in the interval [−µ, µ] (the same value of µ is used in all numerical calculations in this paper). As a measure of the error we calculate

ε = ‖ |W⟩ − |V⟩ ‖ = √(ε_D + ε_ND)
ε_D = Σ_i N_i² ⟨V|g(i,i)|V⟩ ;  ε_ND = Σ_{i≠j} N_i N_j ⟨V|g(i,j)|V⟩.   (63)

ε_D contains the diagonal terms, which are positive numbers, and ε_ND contains the non-diagonal terms, which might be negative.
For comparison, we also expand the same vector in the orthonormal basis of position states, as

|V⟩ = Σ_{α=1}^{d} V(α) |X;α⟩ ;  V(α) = ⟨X;α|V⟩.   (64)

We then add noise to these d components as follows:

|W_orth⟩ = Σ_{α=1}^{d} [1 + N_α] V(α) |X;α⟩ = |V⟩ + Σ_{α=1}^{d} N_α V(α) |X;α⟩.   (65)

Here the N_α are d independent real random numbers, uniformly distributed in the interval [−µ, µ].
As a measure of the error in this case, we calculate the number

ε_orth = ‖ |W_orth⟩ − |V⟩ ‖ = [Σ_α N_α² |V(α)|²]^{1/2}.   (66)

Here we only have diagonal terms, which are positive numbers. Therefore we expect that in general the error ε will be smaller than the error ε_orth. Numerical results below confirm that this is the case.

Example VI.2. In H₂ we consider the vector of Eq.(60). We used the three density matrices σ(1), σ(2), σ(3) in Eq.(33) (which are renormalizations of the three vectors in Eq.(30)) as a generalized basis. Using three independent random numbers, we calculated the errors ε_D, ε_ND and ε of Eq.(63). We repeated the calculation five times (with different sets of random numbers) and found the errors given in table I.
We also used the four density matrices σ(1), σ(2), σ(3), σ(4) in Eq.(40) (which are renormalizations of the four vectors in Eq.(37)) as a generalized basis.
Using Eq.(62) with four independent random numbers, we calculated the errors of Eq.(63) in this case also; the results are given in table I.
Furthermore, we used the orthonormal basis in Eq.(64) and added noise to the two components as in Eq.(65), using two independent random numbers. We then calculated the error ε_orth of Eq.(66), and give the results in table I.
The results show that the generalized bases of the density matrices σ(i) lead to smaller error than the orthonormal bases. In some cases the non-diagonal parts ε_ND of the error are negative, and this contributes to the reduction of the error.

VII. USE OF GENERALIZED BASES TO DETECT PHYSICAL CHANGES IN THE PRESENCE OF NOISE

A. Location indices of a Hermitian operator
Definition VII.1.
Let θ(λ) be a Hermitian operator, e.g. a Hamiltonian that depends on a coupling parameter λ. Also let s_θ(i|λ) be the n coefficients defined in Eq.(42) (which are here functions of λ). We order the s_θ(1|λ), ..., s_θ(n|λ) as

s_θ(i₁|λ) ≥ s_θ(i₂|λ) ≥ ... ≥ s_θ(i_n|λ).   (67)

The location index of θ(λ), with respect to {σ(i)}, is the n-tuple

L[θ(λ)] = (i₁, ..., i_n) ∈ 𝔗.   (68)

Here 𝔗 is the set of the n! permutations of the n labels i of the s_θ(i|λ).
The L[θ(λ)] indicates the position of θ(λ) with respect to the generalized basis {σ(i)}. θ(λ) is most close to σ(i₁) (because s_θ(i₁|λ) is the largest), less close to σ(i₂), even less close to σ(i₃), etc.
In ref [6] we used this concept with projectors Π(i) related to coherent states, which are linked to the familiar concept of phase space, and then the L[θ(λ)] (with n = d²) locates the operator θ(λ) in phase space. Here the physical interpretation of L[θ(λ)] is more abstract, because the Π(i) are arbitrarily chosen. Nevertheless, the L[θ(λ)] describes the position of θ(λ) with respect to the {σ(i)}, which resolve the identity.
Operators θ(λ) for which the n values s_θ(i|λ) (with i = 1, ..., n) are different from each other (i.e., there is no equality in Eq.(67)) are described by only one permutation (i₁, ..., i_n). This motivates the following definition.

Definition VII.2.
For a given set Θ = {θ(λ) | λ ∈ [a, b]}, its subset Θ̃ = {θ(λ) | λ ∈ I ⊆ [a, b]} contains all θ(λ) for which the n values s_θ(i|λ) (with fixed λ and i = 1, ..., n) are different from each other. The interval I excludes all values of λ for which there are some equalities in Eq.(67).

Proposition VII.3.
Within the set Θ̃, we say that θ(λ₁) and θ(λ₂) are comonotonic, and denote it as θ(λ₁) ∼ θ(λ₂), if L[θ(λ₁)] = L[θ(λ₂)]. Then ∼ is an equivalence relation, and Θ̃ is partitioned into equivalence classes, each of which contains operators which are comonotonic to each other.

Proof. The proofs of reflexivity (θ(λ) ∼ θ(λ)) and symmetry (if θ(λ₁) ∼ θ(λ₂) then θ(λ₂) ∼ θ(λ₁)) are trivial. Transitivity holds within Θ̃: indeed, if L[θ(λ₁)] = L[θ(λ₂)] and L[θ(λ₂)] = L[θ(λ₃)], then L[θ(λ₁)] = L[θ(λ₃)]. It is important for the proof that only one permutation corresponds to a given θ(λ); for this reason, transitivity does not hold within Θ in general.

Definition VII.4.
If all θ(λ) in the set {θ(λ) | λ ∈ (c₁, c₂)} are comonotonic to each other, then R = (c₁, c₂) is called a comonotonicity interval (with respect to the operators θ(λ)). The points in the set [a, b] \ I are crossing points from one comonotonicity region to another.

In this paper we show with examples that comonotonic operators are physically similar operators. As λ varies within a comonotonicity interval, we get mild physical changes in the system. The crossing points from one comonotonicity interval to another might be related to drastic physical changes in the system. In the example below, this involves an abrupt change in the ground state of the system.

B. Ground state of a physical system
In the Hilbert space H₂ we consider a system with Hamiltonian described by the matrix

θ(λ) = (1 0; 0 1) + λ (0 1+i; 1−i 0). (69)

This two-dimensional system is often used as an approximation to an infinite-dimensional system, where due to low energy the system is practically confined to the subspace of the lowest two states. Many of the experimentally available qubits are of this type (e.g., the superconducting qubits).

We will study changes to the ground state of the system as the coupling parameter λ varies from negative to positive values. We will show that at λ = 0 the ground state of the system changes abruptly from one vector to another one which is orthogonal to it.

A method is practically useful if it is robust in the presence of noise: if we add a small amount of noise (due to experimental and other errors) to the 'real' values of the parameters, the results should not change much. In order to study this we consider the 'noisy Hamiltonian'

φ(λ) = (1+N₁ 0; 0 1+N₂) + λ (0 1+i; 1−i 0). (70)

For simplicity, we add noise only to the 'free part' of the Hamiltonian, with the independent random numbers N₁, N₂, which are uniformly distributed in the interval [−μ, μ]. φ(λ) is an approximation to the 'real Hamiltonian' θ(λ). We will show that the ground state of φ(λ) changes rapidly but smoothly within a small region of λ around λ = 0, with width of order |N₁ − N₂|. The abrupt change of the ground state of θ(λ) at λ = 0 becomes a rapid but smooth change of the ground state of φ(λ), within a small region around λ = 0.

Our method based on generalized bases is complementary to the calculation of eigenvalues and eigenvectors, and is robust in the presence of noise because of the redundancy which is inherent in it. For the noiseless Hamiltonian θ(λ) there are two comonotonicity regions, (−∞, 0) and (0, ∞), and the point λ = 0 is a crossing point from the first comonotonicity region to the second one. For the noisy Hamiltonian φ(λ) there are more crossing points near λ = 0, which indicate that drastic physical changes occur in that region. There are no crossing points far from λ = 0, and this reflects the fact that only mild physical changes occur there.
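The abrupt versus smooth behaviour of the ground state can be illustrated numerically. This is a minimal sketch, assuming the reconstructed matrix forms of Eqs.(69),(70) (θ(λ) = 1 + λC and φ(λ) = diag(1+N₁, 1+N₂) + λC, with C = (0 1+i; 1−i 0)); the particular values of λ, N₁, N₂ are illustrative.

```python
# Sketch: ground state of theta(lam) flips abruptly at lam = 0, while for
# phi(lam) (with fixed noise N1, N2) the flip is smoothed out near lam = 0.
# Matrix forms are the reconstructions of Eqs.(69),(70); numbers are illustrative.
import numpy as np

C = np.array([[0, 1 + 1j], [1 - 1j, 0]])

def theta(lam):
    return np.eye(2) + lam * C

def phi(lam, N1, N2):
    return np.diag([1 + N1, 1 + N2]).astype(complex) + lam * C

def ground_state(H):
    w, v = np.linalg.eigh(H)       # eigenvalues in ascending order
    return v[:, 0]                 # eigenvector of the lowest eigenvalue

# Noiseless case: ground states on either side of lam = 0 are orthogonal.
overlap_noiseless = abs(np.vdot(ground_state(theta(-0.1)),
                                ground_state(theta(+0.1))))

# Noisy case: for |lam| much smaller than D = N1 - N2, the ground state
# barely changes across lam = 0.
overlap_noisy = abs(np.vdot(ground_state(phi(-1e-4, 0.01, -0.01)),
                            ground_state(phi(+1e-4, 0.01, -0.01))))

print(round(overlap_noiseless, 6))   # close to 0 (orthogonal)
print(overlap_noisy > 0.9)           # nearly the same state across lam = 0
```

The absolute value of the overlap is used so that the arbitrary phases of the numerically computed eigenvectors do not matter.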
1. Noiseless Hamiltonians at zero temperature in a generalized basis
The eigenvalues (energy levels) and eigenvectors of the 'noiseless Hamiltonian' θ(λ) are

e₁(λ) = 1 + λ√2; |e₁⟩ = (1/2)(1+i, √2)ᵀ
e₂(λ) = 1 − λ√2; |e₂⟩ = (1/2)(−(1+i), √2)ᵀ; ⟨e₁|e₂⟩ = 0. (71)

For λ < 0 the |e₁⟩ is the ground state of the system, while for λ > 0 the |e₂⟩ is the ground state of the system. At λ = 0 the two eigenvalues become equal to each other, and the ground state changes abruptly from |e₁⟩ for λ < 0, to |e₂⟩ (which is orthogonal to |e₁⟩) for λ > 0.

Using the density matrices in Eq.(33), we find that the s_θ(i|λ) are

s_θ(1|λ) = (2/3)[1 − 0.049λ]; s_θ(2|λ) = (2/3)[1 − 0.649λ]; s_θ(3|λ) = (2/3)[1 + 0.699λ]
s_θ(1|λ) + s_θ(2|λ) + s_θ(3|λ) = 2. (72)

Therefore we have two comonotonicity regions (which we give together with the corresponding location indices for θ(λ)):

R₁ = (−∞, 0); L[θ(λ)] = (2, 1, 3)
R₂ = (0, ∞); L[θ(λ)] = (3, 1, 2). (73)

At λ = 0 we pass from the first comonotonicity region to the second one, and this is associated with drastic physical changes in the ground state of the system.

We also use the density matrices in Eq.(40), and we find that the s_θ(i|λ) are

s_θ(1|λ) = (1/2)[1 − 0.167λ]; s_θ(2|λ) = (1/2)[1 − 0.699λ]
s_θ(3|λ) = (1/2)[1 + 0.498λ]; s_θ(4|λ) = (1/2)[1 + 0.368λ]
s_θ(1|λ) + s_θ(2|λ) + s_θ(3|λ) + s_θ(4|λ) = 2. (74)

Therefore we have two comonotonicity regions:

R₁ = (−∞, 0); L[θ(λ)] = (2, 1, 4, 3)
R₂ = (0, ∞); L[θ(λ)] = (3, 4, 1, 2). (75)

It is seen that with this generalized basis also, we arrive at the same conclusions. Two different generalized bases lead to the same conclusion as the method of eigenvectors and eigenvalues.
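The location index of Definition VII.1 is simply a descending sort of the coefficients s_θ(i|λ). A minimal sketch, assuming the s_θ(i|λ) are the linear functions of λ whose slopes follow from the surviving coefficients of Eq.(79) (t_i = −√2 c_i); the slope values should be treated as illustrative:

```python
# Sketch of Definition VII.1: the location index L[theta(lam)] is the
# permutation that sorts the coefficients s_theta(i|lam) in descending order.
# The slopes t_i below are illustrative reconstructions (t_i = -sqrt(2)*c_i
# from the surviving sinh coefficients of Eq.(79)).
import numpy as np

def s_theta(lam):
    t = np.array([-0.049, -0.649, 0.699])   # illustrative slopes
    return (2.0 / 3.0) * (1 + t * lam)      # s_theta(i|lam), i = 1, 2, 3

def location_index(s):
    # n-tuple (i1, ..., in) with s(i1) >= s(i2) >= ... >= s(in); labels from 1
    return tuple(int(i) + 1 for i in np.argsort(-s, kind="stable"))

# Two comonotonicity regions, separated by the crossing point lam = 0:
print(location_index(s_theta(-0.5)))   # (2, 1, 3)
print(location_index(s_theta(+0.5)))   # (3, 1, 2)
```

Negating the array before `argsort` gives a descending order; any λ within the same comonotonicity region yields the same tuple.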
2. Noiseless Hamiltonians at finite temperature in a generalized basis
Let

E = exp[−βθ(λ)]; s_E(i) = (d/n) Tr[E σ(i)], (76)

where β is the inverse temperature. Then the partition function is

Z = Tr E = Σᵢ₌₁ⁿ s_E(i). (77)

For the Hamiltonian θ(λ) we get

E = e^(−β) ( cosh(βλ√2)   −[(1+i)/√2] sinh(βλ√2) ; −[(1−i)/√2] sinh(βλ√2)   cosh(βλ√2) ). (78)

We use the density matrices in Eq.(33), and we find that the s_E(i|λ) are

s_E(1|λ) = (2/3) e^(−β) [cosh(βλ√2) + 0.035 sinh(βλ√2)]
s_E(2|λ) = (2/3) e^(−β) [cosh(βλ√2) + 0.459 sinh(βλ√2)]
s_E(3|λ) = (2/3) e^(−β) [cosh(βλ√2) − 0.494 sinh(βλ√2)]. (79)

We also use the density matrices in Eq.(40), and we find that the s_E(i|λ) are

s_E(1|λ) = (1/2) e^(−β) [cosh(βλ√2) + 0.118 sinh(βλ√2)]
s_E(2|λ) = (1/2) e^(−β) [cosh(βλ√2) + 0.494 sinh(βλ√2)]
s_E(3|λ) = (1/2) e^(−β) [cosh(βλ√2) − 0.352 sinh(βλ√2)]
s_E(4|λ) = (1/2) e^(−β) [cosh(βλ√2) − 0.260 sinh(βλ√2)]. (80)

In calculations that involve the partition function, we can use a generalized basis and the s_E(i|λ), instead of an orthonormal basis. The merit of this is robustness of the results in the presence of noise, as we show with examples below.

We note that the partition function is

Z = 2 e^(−β) cosh(βλ√2), (81)

and from this we find the average energy

⟨e(λ)⟩ = −(1/Z) ∂Z/∂β = 1 − λ√2 tanh(βλ√2). (82)

It is seen that at low temperatures (β → ∞),

λ > 0 → ⟨e(λ)⟩ ≈ 1 − λ√2; λ < 0 → ⟨e(λ)⟩ ≈ 1 + λ√2. (83)

This is consistent with the result in Eq.(71), at zero temperature.
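The closed forms for Z and ⟨e(λ)⟩ can be cross-checked numerically against a direct matrix exponential. This sketch assumes the reconstructed form θ(λ) = 1 + λC of Eq.(69); β and λ are arbitrary test values.

```python
# Numerical check of Z = 2 e^{-beta} cosh(beta*lam*sqrt(2)) and of
# <e(lam)> = 1 - lam*sqrt(2)*tanh(beta*lam*sqrt(2)), assuming the
# reconstructed theta(lam) = I + lam*C of Eq.(69).
import numpy as np

C = np.array([[0, 1 + 1j], [1 - 1j, 0]])
beta, lam = 2.0, 0.3
theta = np.eye(2) + lam * C

w, v = np.linalg.eigh(theta)                  # spectral decomposition of theta
E = (v * np.exp(-beta * w)) @ v.conj().T      # E = exp(-beta*theta)
Z = np.trace(E).real
avg_e = np.trace(theta @ E).real / Z          # <e> = Tr[theta E] / Z

assert np.isclose(Z, 2 * np.exp(-beta) * np.cosh(beta * lam * np.sqrt(2)))
assert np.isclose(avg_e, 1 - lam * np.sqrt(2) * np.tanh(beta * lam * np.sqrt(2)))
```

The matrix exponential is built from the eigendecomposition of the Hermitian θ(λ), which avoids any dependence on `scipy.linalg.expm`.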
3. Hamiltonians with noise at zero temperature: eigenvalues approach
The eigenvalues and eigenvectors of the 'noisy Hamiltonian' φ(λ) are

e_A(λ) = 1 + S/2 − √(D²/4 + 2λ²); S = N₁ + N₂, D = N₁ − N₂
e_B(λ) = 1 + S/2 + √(D²/4 + 2λ²). (84)

It is convenient to replace the random numbers N₁, N₂ with S, D, which are also independent random numbers. The corresponding eigenvectors (not normalized) are given by

|e_A(λ)⟩ = ( (λ/|λ|)(1+i), −[D/(2|λ|) + √((D/(2|λ|))² + 2)] )ᵀ
|e_B(λ)⟩ = ( (λ/|λ|)(1+i), −D/(2|λ|) + √((D/(2|λ|))² + 2) )ᵀ; ⟨e_A(λ)|e_B(λ)⟩ = 0. (85)

The eigenvectors depend on the sign of λ and on the value of D/|λ|. The lowest eigenvalue is e_A(λ), and the corresponding eigenvector is |e_A(λ)⟩. For small values of D/|λ|, the normalized ground state is

|e_A(λ)⟩ = (1/2)( (λ/|λ|)(1+i), −√2 )ᵀ + O(D/|λ|). (86)

In this case for λ < 0 we get |e_A(λ)⟩ ≈ |e₁⟩, and for λ > 0 we get |e_A(λ)⟩ ≈ |e₂⟩ (up to overall phase factors). It is seen that when the noise parameter D is much smaller than the coupling parameter, we recover the results of the noiseless case discussed earlier.

Without loss of generality, we assume that D ≥ 0. The physically interesting and practically useful case is to assume a fixed noise parameter D, and to study the ground state as λ varies within the region (−D, D), and in particular very close to 0. This is the limit of large values of D/|λ|. We compare |e_A(−|λ|)⟩ with |e_A(|λ|)⟩, and see to what extent they are orthogonal, as in the noiseless case. In particular, we calculate the overlap

r(|λ|) = ⟨e_A(−|λ|)|e_A(|λ|)⟩ / √( ⟨e_A(−|λ|)|e_A(−|λ|)⟩ ⟨e_A(|λ|)|e_A(|λ|)⟩ ) = (A² − 2)/(A² + 2); A = D/(2|λ|) + √( (D/(2|λ|))² + 2 ). (87)

For fixed D and when λ is close to zero, the D/|λ| is large and the r(|λ|) is close to 1. It is seen that as λ changes from negative to positive values, the |e_A(−|λ|)⟩ changes quickly but smoothly to |e_A(|λ|)⟩ (the angle between these two vectors is small and decreases gradually as |λ| goes near 0). There are no discontinuities, in the sense that for any given value of r(|λ|) we can find the value of D/|λ| which leads to it. Therefore, in the presence of noise, the method of the eigenvalues and eigenvectors cannot find the abrupt change in the ground state of the 'real system' at λ = 0. Instead, it finds rapid but smooth changes of the ground state within the small interval (−D, D), and slow changes in the large region outside it.

Above we worked with the eigenvalues e_A(λ), e_B(λ), which are random numbers. An alternative approximate approach is to work with their expectation values. We assume that the average value of the random variables S, D is 0, and that the standard deviation of D is σ. If g(D) is a function of D, then its expectation value E[g(D)] is given by (e.g., Eq.(5-61) in [17])

E[g(D)] = g(0) + (1/2) g″(0) σ² + ... (88)

If we ignore the higher moments, we get

E[ √(D²/4 + 2λ²) ] ≈ |λ|√2 + σ²/(8√2 |λ|). (89)

Therefore

E[e_A(λ)] ≈ 1 − |λ|√2 − σ²/(8√2 |λ|); E[e_B(λ)] ≈ 1 + |λ|√2 + σ²/(8√2 |λ|). (90)

This approach also shows that the ground state energy (averaged over noise) is E[e_A(λ)], and as we go from negative to positive values of λ, the ground state changes from |e_A(−|λ|)⟩ to |e_A(|λ|)⟩. As we explained above (using Eq.(87)), this is a smooth but quick change of the ground state.
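The reconstructed closed form of the overlap r(|λ|) in Eq.(87) can be checked against direct diagonalization of the noisy Hamiltonian. This is a sketch under the assumptions of Eqs.(70),(84),(87) as reconstructed above; the value of D and the grid of |λ| values are illustrative.

```python
# Sketch: overlap r(|lam|) = <e_A(-|lam|)|e_A(|lam|)>/(norms), compared with
# the reconstructed closed form (A^2 - 2)/(A^2 + 2),
# A = D/(2|lam|) + sqrt((D/(2|lam|))^2 + 2). D and the lam grid are illustrative.
import numpy as np

C = np.array([[0, 1 + 1j], [1 - 1j, 0]])
D = 0.02                                    # fixed noise parameter N1 - N2

def ground(lam):
    phi = np.diag([1 + D / 2, 1 - D / 2]).astype(complex) + lam * C
    return np.linalg.eigh(phi)[1][:, 0]     # eigenvector of the lowest eigenvalue

def r_closed(abs_lam):
    A = D / (2 * abs_lam) + np.sqrt((D / (2 * abs_lam)) ** 2 + 2)
    return (A ** 2 - 2) / (A ** 2 + 2)

for abs_lam in [1e-4, 1e-3, 1e-2, 1e-1]:
    r_num = abs(np.vdot(ground(-abs_lam), ground(abs_lam)))
    assert np.isclose(r_num, r_closed(abs_lam))
# r -> 1 for |lam| << D (smooth change), r -> 0 for |lam| >> D (near-orthogonal)
```

The noise realization N₁ = D/2, N₂ = −D/2 (i.e., S = 0) is used for simplicity; the overlap depends only on D and |λ|, not on S.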
4. Hamiltonians with noise at zero temperature: generalized bases approach
We next use the generalized bases studied in this paper. We first use the density matrices in Eq.(33), and we find that the s_θ(i|λ) take the form

s_θ(i|λ) = (2/3)[1 + S/2 + uᵢD + tᵢλ], i = 1, 2, 3, (91)

with t₁ = −0.049, t₂ = −0.649, t₃ = 0.699 as in Eq.(72), and with coefficients uᵢ = ½[σ₁₁(i) − σ₂₂(i)] determined by the diagonal elements of the σ(i); the sum rule is s_θ(1|λ) + s_θ(2|λ) + s_θ(3|λ) = 2 + S.

We assume that D > 0. Then there are four comonotonicity regions (which we give together with the corresponding location indices for θ(λ)):

R₁ = (−∞, −c₁D); L[θ(λ)] = (2, 1, 3)
R₂ = (−c₁D, −c₂D); L[θ(λ)] = (1, 2, 3)
R₃ = (−c₂D, c₃D); L[θ(λ)] = (1, 3, 2)
R₄ = (c₃D, ∞); L[θ(λ)] = (3, 1, 2), (92)

where c₁, c₂, c₃ are positive constants of order one, determined by the coefficients in Eq.(91). There are three crossing points near λ = 0 (at −c₁D, −c₂D, c₃D), which indicate that drastic physical changes occur in that region. There are no crossing points far from λ = 0, and this indicates that only mild physical changes occur there. We note that if we average over the random variable D, then we get two comonotonicity regions (−∞, 0) and (0, ∞), as in the noiseless case.

We also use the density matrices in Eq.(40), and we find that the s_θ(i|λ) take the analogous form

s_θ(i|λ) = (1/2)[1 + S/2 + uᵢD + tᵢλ], i = 1, ..., 4, (93)

with t₁ = −0.167, t₂ = −0.699, t₃ = 0.498, t₄ = 0.368 as in Eq.(74); the sum rule is s_θ(1|λ) + s_θ(2|λ) + s_θ(3|λ) + s_θ(4|λ) = 2 + S. We assume that D > 0. Then there are seven comonotonicity regions (94), starting with L[θ(λ)] = (2, 1, 4, 3) for λ → −∞ and ending with L[θ(λ)] = (3, 4, 1, 2) for λ → ∞; at each of the six crossing points (at values of λ proportional to D, all near λ = 0) the location index changes by a transposition of neighbouring labels. The six crossing points near λ = 0 indicate that drastic physical changes occur in that region. The fact that there are no crossing points far from λ = 0 indicates that only mild physical changes occur there. This conclusion is the same as the conclusion derived earlier using a different generalized basis, and also using eigenvalues and eigenvectors. Again, if we average over the random variable D, then we get two comonotonicity regions (−∞, 0) and (0, ∞), as in the noiseless case.
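Crossing points can be located numerically by scanning λ and recording where the location index changes. In this sketch the noise weights a_i, b_i and the slopes t_i are hypothetical placeholders (the exact coefficients of Eq.(91) are not reproduced here); only the qualitative behaviour — a few crossing points clustered within a few D of λ = 0 — is the point.

```python
# Sketch: crossing points = values of lam where the location index changes.
# The coefficients a_i, b_i (noise weights) are hypothetical placeholders;
# the slopes t_i reuse the illustrative values quoted for Eq.(72).
import numpy as np

N1, N2 = 0.02, -0.01                        # one realization of the noise
D = N1 - N2

def s_phi(lam):
    a = np.array([0.7, 0.5, 0.3])           # hypothetical weights of N1
    b = np.array([0.3, 0.5, 0.7])           # hypothetical weights of N2
    t = np.array([-0.049, -0.649, 0.699])   # illustrative slopes in lam
    return (2.0 / 3.0) * (1 + a * N1 + b * N2 + t * lam)

def location_index(lam):
    return tuple(int(i) + 1 for i in np.argsort(-s_phi(lam), kind="stable"))

lams = np.linspace(-5 * D, 5 * D, 20001)
crossings = [l1 for l1, l2 in zip(lams, lams[1:])
             if location_index(l1) != location_index(l2)]
print(len(crossings))   # 3 crossing points for this noise realization
```

Because the s(i|λ) are linear in λ, each pair of coefficient lines crosses at most once, so the number of crossing points is at most n(n−1)/2, and all of them sit at values of λ proportional to D.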
5. Shannon entropy in a generalized basis
Let

H(λ) = [1/(h₁(λ) + h₂(λ))] ( h₁(λ)  h(λ) ; [h(λ)]*  h₂(λ) ) (95)

be a positive semidefinite Hamiltonian with unit trace, where h₁(λ), h₂(λ) are real functions of the coupling parameter λ, and h(λ) is a complex function of λ. We consider the pseudo-probabilities

s_H(i|λ) = (d/n) Tr[H(λ)σ(i)]; Σᵢ₌₁ⁿ s_H(i|λ) = 1, (96)

where {σ(i)} is a generalized basis, and the corresponding entropy

E_n(λ) = −Σᵢ₌₁ⁿ s_H(i|λ) log[s_H(i|λ)]. (97)

We also consider the von Neumann entropy

E_vN(λ) = −Tr[H(λ) log H(λ)]. (98)

Proposition VII.5. A necessary and sufficient condition for the eigenvalues of H(λ) to be equal to each other (and equal to 1/2) is that h₁(λ) = h₂(λ) and h(λ) = 0. If there exists a value λ = λ₀ which satisfies these conditions, then the entropies for this Hamiltonian are:

E_n(λ₀) = log n; E_vN(λ₀) = log 2. (99)

Proof. The characteristic equation of the unnormalized matrix in Eq.(95) is

(h₁ − μ)(h₂ − μ) − |h|² = 0. (100)

The discriminant of this equation is

Δ = (h₁ − h₂)² + 4|h|². (101)

The eigenvalues are equal to each other when the discriminant is equal to zero, and this gives the conditions h₁(λ) = h₂(λ) and h(λ) = 0. If there exists a value λ = λ₀ which satisfies these conditions, the Hamiltonian at this value is H(λ₀) = ½·1, and therefore

s_H(i|λ₀) = (2/n) Tr[½σ(i)] = 1/n. (102)

From this it follows that E_n(λ₀) = log n. Also, when the eigenvalues are equal to each other (and equal to 1/2), E_vN(λ₀) = log 2.

We have explained earlier that when the two eigenvalues are equal to each other, the ground state changes abruptly from one state to another. In the proposition above we have shown that at this point the entropies E_n(λ) (and also the E_vN(λ)) take their maximum values.

We normalize the Hamiltonian θ(λ) and also the 'noisy Hamiltonian' φ(λ) in Eqs.(69),(70), so that their trace is one:

θ̃(λ) = θ(λ)/Tr[θ(λ)]; φ̃(λ) = φ(λ)/Tr[φ(λ)]. (103)

We calculated the pseudo-probabilities s_θ̃(i|λ), s_φ̃(i|λ) for values of λ close to zero, so that these operators are positive semidefinite. We then calculated the entropy E_n with the generalized basis in Eq.(33), and also with the generalized basis in Eq.(40) (we denote them E₃, E₄ for the noiseless normalized Hamiltonian θ̃(λ), and E₃^noise, E₄^noise for the noisy normalized Hamiltonian φ̃(λ), correspondingly).

We also calculated the von Neumann entropies E_vN(λ) and E_vN^noise(λ), for θ̃(λ) and φ̃(λ), correspondingly. There is an exact symmetry E_vN(−λ) = E_vN(λ) for the von Neumann entropy. For the entropy in Eq.(97), there is an approximate symmetry E_n(−λ) ≈ E_n(λ), for small values of λ.

In Table II we give the von Neumann entropy E_vN/log 2, and the entropies E₃/log 3 and E₄/log 4, for various values of λ.
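Proposition VII.5 can be checked numerically. Since the generalized bases of Eqs.(33),(40) are defined earlier in the paper, this sketch uses a hypothetical n = 3 generalized basis built from a symmetric frame of Bloch vectors; any set of density matrices σ(i) with (d/n)Σᵢσ(i) = 1 would serve the same purpose.

```python
# Numerical check of Proposition VII.5 with a hypothetical n = 3 generalized
# basis: three density matrices sigma(i) with (2/3)*sum(sigma) = identity.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
angles = [0, 2 * np.pi / 3, 4 * np.pi / 3]
sigmas = [0.5 * (np.eye(2) + 0.8 * (np.cos(t) * sz + np.sin(t) * sx))
          for t in angles]                              # three density matrices
assert np.allclose((2 / 3) * sum(sigmas), np.eye(2))    # resolution of the identity

H0 = 0.5 * np.eye(2)          # H(lam0): both eigenvalues equal to 1/2
s = np.array([(2 / 3) * np.trace(H0 @ sig).real for sig in sigmas])
E_n = -np.sum(s * np.log(s))
w = np.linalg.eigvalsh(H0)
E_vN = -np.sum(w * np.log(w))

assert np.isclose(E_n, np.log(3))     # E_n(lam0) = log n
assert np.isclose(E_vN, np.log(2))    # E_vN(lam0) = log 2
```

The Bloch radius 0.8 keeps each σ(i) positive semidefinite; the three Bloch vectors sum to zero, which is what makes the σ(i) resolve the identity.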
We also give the quantities

(E_vN − E_vN^noise)/E_vN; (E₃ − E₃^noise)/E₃; (E₄ − E₄^noise)/E₄. (104)

It is seen that the entropies E_n are more robust in the presence of noise than the von Neumann entropy E_vN(λ). For the amounts of noise that we used, the von Neumann entropy has an error of approximately 9%, and the other entropies have much smaller errors. We note that in the example that we considered, all quantities in Eq.(104) take positive values. This is because noise makes the eigenvalues more unequal (see Eq.(84)), and this decreases the entropy.

We conclude that the entropies associated with our generalized bases are more robust in the presence of noise than the entropies associated with orthonormal bases.

VIII. DISCUSSION
We introduced redundancy into the concept of a basis in a d-dimensional Hilbert space. We started with a total set of n > d vectors, and renormalized it into a generalized basis, which consists of n density matrices that resolve the identity. The renormalization formalism uses Möbius operators, and is inspired by the Shapley methodology in cooperative game theory, as discussed in [6] for the special case of coherent states. In the present paper we use an arbitrary total set of n > d vectors. The non-independence and redundancy in a generalized basis is quantified with a Shannon type of entropy, which takes values in the interval (log n − log d, log n).

We have shown that the merit of calculations in a generalized basis is that the results are sensitive to physical changes and robust in the presence of noise. These two requirements may appear to be contradictory, but they are not, because noise affects the whole basis in an almost equal way, while physical changes affect some parts of the basis more than others. We have shown with examples that the addition of noise to the coefficients of a vector in a generalized basis does not change the vector significantly.

We have also applied the formalism to the study of the ground state of a system with the Hamiltonian in Eq.(69), which is frequently used as an approximation to an infinite-dimensional system operating in the subspace of the lowest two states. The concepts of 'location index with respect to a generalized basis' and 'comonotonicity intervals of the coupling parameter' have been used to detect drastic changes in the ground state of the system, as the coupling parameter changes. It has been shown that the method is robust in the presence of noise.

The work extends the area of coherent states, POVMs, frames and wavelets in a new direction. It starts from any total set of n > d vectors, and leads to n mixed states that resolve the identity. The method has been used only with finite-dimensional Hilbert spaces.
However, cooperative game theory is also applied to a continuum of players (e.g., [18]), and this could be used to extend our methodology to infinite-dimensional Hilbert spaces. In this case the sums contain an infinite number of terms, and the challenge is to ensure that they converge.

We note that the present paper is not related to work on quantum game theory, which is game theory with the superposition principle. Here we use the mathematical methodology of Shapley in cooperative game theory, to renormalize the vectors in a total set into density matrices that resolve the identity.

[1] C.E. Shannon, Bell Syst. Tech. J. 30, 47 (1951)
[2] J. von Neumann, O. Morgenstern, 'Theory of games and economic behaviour' (Princeton Univ. Press, Princeton, 1944)
[3] L.S. Shapley, Ann. Math. Studies 28, 307 (1953); reprinted in [4]
[4] A. Roth (Ed.), 'The Shapley value: Essays in honour of Lloyd S. Shapley' (Cambridge Univ. Press, Cambridge, 1988)
[5] B. Peleg, P. Sudholter, 'Introduction to the theory of cooperative games' (Springer, Berlin, 2003)
[6] A. Vourdas, Ann. Phys. 376, 153 (2017)
[7] J.R. Klauder, B-S. Skagerstam (Eds.), 'Coherent states' (World Sci., Singapore, 1985)
[8] S.T. Ali, J-P. Antoine, J-P. Gazeau, 'Coherent states, wavelets and their generalizations' (Springer, Berlin, 2000)
[9] Y. Meyer, 'Wavelets and operators' (Cambridge Univ. Press, Cambridge, 1992)
[10] A. Vourdas, J. Phys. A 49, 145002 (2016)
[11] A. Vourdas, J. Geom. Phys. 101, 38 (2016)
[12] G.C. Rota, Z. Wahrscheinlichkeitstheorie 2, 340 (1964)
[13] M. Barnabei, A. Brini, G.C. Rota, Russian Math. Surveys 41, 135 (1986)
[14] I.S. Gradshteyn, I.M. Ryzhik, 'Table of integrals, series and products' (Academic, London, 1965)
[15] E. Carlen, Contemp. Math. 529, 73 (2009)
[16] M.B. Ruskai, J. Math. Phys. 43, 4358 (2002)
[17] A. Papoulis, 'Probability, random variables and stochastic processes' (McGraw-Hill, New York, 1965)
[18] R. Aumann, L. Shapley, 'Values of non-atomic games' (Princeton Univ.
Press, Princeton, 1974)

TABLE I: The vector |V⟩ in Eq.(60) is represented with 3, 4, 2 component vectors, using the generalized bases in Eqs.(33),(40) and the orthonormal basis in Eq.(64), correspondingly. Random numbers (uniformly distributed in an interval [−0.…, 0.…]) are added to the components, and the corresponding approximate vectors |V⟩ are calculated. The corresponding errors ε₃, ε₄, ε_orth are shown. Their diagonal parts (ε₃D, ε₄D) and non-diagonal parts (ε₃ND, ε₄ND) are also shown. The calculation has been repeated five times, with different sets of random numbers.

ε₃      ε₃D     ε₃ND     ε₄      ε₄D     ε₄ND     ε_orth
0.212   0.…    −0.013    0.296   0.049   0.038    0.…
0.245   0.…    −0.015    0.144   0.021   0        0.…
0.181   0.032   0        0.088   0.…    −0.017    0.…
0.187   0.026   0.008    0.204   0.018   0.022    0.…
0.143   0.…    −0.030    0.066   0.…    −0.015    0.…

TABLE II: Various entropies for the Hamiltonians θ̃(λ) and φ̃(λ) in Eq.(103) (the entropies in the latter case have the superscript 'noise'). E_vN is the von Neumann entropy, E₃ is the entropy with respect to the generalized basis in Eq.(33), and E₄ is the entropy with respect to the generalized basis in Eq.(40).

λ      E_vN/log 2   E₃/log 3   E₄/log 4   (E_vN−E_vN^noise)/E_vN   (E₃−E₃^noise)/E₃   (E₄−E₄^noise)/E₄
−0.…   0.754   0.977   0.987   0.098   0.019   0.…
−0.…   0.866   0.987   0.992   0.094   0.019   0.…
−0.…   0.941   0.994   0.996   0.092   0.018   0.…
−0.…   0.985   0.998   0.999   0.091   0.016   0.…
 0      1       1       1       0.091   0.015   0.…
 0.…   0.985   0.998   0.999   0.091   0.011   0.…
 0.…   0.941   0.994   0.996   0.092   0.009   0.…
 0.…   0.866   0.987   0.992   0.094   0.005   0.…
 0.…   0.754   0.977   0.986   0.098   0       0.…