The Inverse Eigenvalue Problem for Entanglement Witnesses
Nathaniel Johnston∗† and Everett Patterson∗

August 20, 2017
Abstract
We consider the inverse eigenvalue problem for entanglement witnesses, which asks for a characterization of their possible spectra (or equivalently, of the possible spectra resulting from positive linear maps of matrices). We completely solve this problem in the two-qubit case and we derive a large family of new necessary conditions on the spectra in arbitrary dimensions. We also establish a natural duality relationship with the set of absolutely separable states, and we completely characterize witnesses (i.e., separating hyperplanes) of that set when one of the local dimensions is $2$.

1 Introduction

In linear algebra and matrix theory, an inverse eigenvalue problem asks for a characterization of the possible spectra (i.e., the ordered tuples of eigenvalues) of a given set of matrices. Perhaps the most well-known inverse eigenvalue problem asks for the possible spectra of entrywise non-negative matrices [1]. This is called the non-negative inverse eigenvalue problem (NIEP); it has been completely solved for matrices of size $4 \times 4$ and smaller, but remains unsolved for larger matrices.

Several variants of the NIEP have also been investigated, asking instead for a characterization of the possible spectra of symmetric non-negative matrices [2], stochastic matrices [3, 4], or doubly stochastic matrices [5, 6]. Similarly, the inverse eigenvalue problem has been considered for Toeplitz matrices [7] and tridiagonal matrices [8], among many others (see [9] and the references therein).

In this paper, we consider the inverse eigenvalue problem for entanglement witnesses, which are matrices of interest in quantum information theory that will be defined in the next section. Equivalently, we investigate which spectra can result from applying a positive matrix-valued map to just part of a positive semidefinite matrix. Such maps are of interest in operator theory (and again, we introduce the mathematical details in the next section).

The paper is organized as follows.
In Section 2, we introduce the various mathematical tools that we will use throughout the paper. In Section 3, we completely solve the inverse eigenvalue problem for two-qubit entanglement witnesses (i.e., the lowest-dimensional non-trivial case). In Section 4, we extend our investigation to qubit-qudit entanglement witnesses and obtain a large family of new necessary conditions on the spectra. In the process, we completely characterize the witnesses of the set of absolutely separable states in these dimensions. In Section 5, we extend our results to obtain necessary conditions on the spectra of decomposable entanglement witnesses in arbitrary dimensions. Finally, we provide closing remarks and open questions in Section 6.

∗ Department of Mathematics & Computer Science, Mount Allison University, Sackville, NB, Canada E4L 1E4
† Department of Mathematics & Statistics, University of Guelph, Guelph, ON, Canada N1G 2W1

2 Mathematical Preliminaries
From a mathematical perspective, quantum information theory (and more specifically, quantum entanglement theory) is largely concerned with properties of (Hermitian) positive semidefinite matrices and the tensor product. A pure quantum state $|v\rangle \in \mathbb{C}^n$ is a unit (column) vector, and a mixed quantum state $\rho \in M_n(\mathbb{C})$ is a (Hermitian) positive semidefinite matrix with $\mathrm{Tr}(\rho) = 1$ (we use $\mathrm{Tr}(\cdot)$ to denote the trace, and $M_n(\mathbb{R})$ or $M_n(\mathbb{C})$ to denote the set of $n \times n$ real or complex matrices, respectively). Whenever we use "ket" notation like $|v\rangle$ or $|w\rangle$, or lowercase Greek letters like $\rho$ or $\sigma$, we are implicitly assuming that they represent pure or mixed quantum states, respectively.

A mixed state $\rho \in M_m(\mathbb{C}) \otimes M_n(\mathbb{C})$ is called separable if there exist pure states $\{|v_j\rangle\} \subseteq \mathbb{C}^m$ and $\{|w_j\rangle\} \subseteq \mathbb{C}^n$ such that
\[ \rho = \sum_j p_j |v_j\rangle\langle v_j| \otimes |w_j\rangle\langle w_j|, \]
where "$\otimes$" is the tensor (Kronecker) product, $\langle v|$ is the dual (row) vector of $|v\rangle$ (so $|v\rangle\langle v|$ is the rank-$1$ projection onto $|v\rangle$), and $p_1, p_2, \ldots$ form a probability distribution (i.e., they are non-negative and add up to $1$). Equivalently, $\rho$ is separable if and only if it can be written in the form
\[ \rho = \sum_j X_j \otimes Y_j, \]
where each $X_j \in M_m(\mathbb{C})$ and $Y_j \in M_n(\mathbb{C})$ is a (Hermitian) positive semidefinite matrix. If $\rho$ is not separable then it is called entangled.

Determining whether or not a mixed state is separable is a hard problem [10, 11], so in practice numerous one-sided tests are used to demonstrate separability or entanglement (see [12, 13] and the references therein for a more thorough introduction to this problem). The most well-known such test says that if we define the partial transpose of a matrix $A = \sum_j B_j \otimes C_j \in M_m(\mathbb{C}) \otimes M_n(\mathbb{C})$ via
\[ A^\Gamma := \sum_j B_j \otimes C_j^T, \]
then $\rho$ being separable implies that $\rho^\Gamma$ is positive semidefinite (so we write $\rho^\Gamma \succeq O$) [14].
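To make the partial transpose concrete, here is a short numerical sketch (the helper name `partial_transpose` and the specific example are ours, not notation from the paper): applying $\Gamma$ to the maximally entangled two-qubit state produces a negative eigenvalue, so the state fails the test above and is certified to be entangled.

```python
import numpy as np

def partial_transpose(A, m, n):
    """Transpose the second (n-dimensional) tensor factor of an mn x mn matrix A."""
    # View A as the 4-index tensor A[i, j, k, l] = (<i| (x) <j|) A (|k> (x) |l>),
    # swap the two second-subsystem indices j and l, then flatten back.
    return A.reshape(m, n, m, n).transpose(0, 3, 2, 1).reshape(m * n, m * n)

# Maximally entangled two-qubit pure state |v> = (|0>|0> + |1>|1>)/sqrt(2).
v = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(v, v)

# rho^Gamma has eigenvalues (1/2, 1/2, 1/2, -1/2): not positive semidefinite,
# so rho does not have positive partial transpose and is therefore entangled.
eigs = np.linalg.eigvalsh(partial_transpose(rho, 2, 2))
print(eigs)  # -> [-0.5  0.5  0.5  0.5]
```

Note that the reshape-and-swap trick above is just the coordinate form of $\sum_j B_j \otimes C_j \mapsto \sum_j B_j \otimes C_j^T$.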
This test follows simply from the fact that if $\rho$ is separable then $\rho = \sum_j X_j \otimes Y_j$ with each $X_j, Y_j \succeq O$, so $\rho^\Gamma = \sum_j X_j \otimes Y_j^T$, which is still positive semidefinite since each $Y_j^T$ is positive semidefinite, and tensoring and adding positive semidefinite matrices preserves positive semidefiniteness. If a mixed state $\rho$ is such that $\rho^\Gamma \succeq O$ then we say that it has positive partial transpose (PPT), and the previous discussion shows that the set of separable states is a subset of the set of PPT states.

A straightforward generalization of the partial transpose test for entanglement is based on positive linear matrix-valued maps. A linear map $\Phi : M_m(\mathbb{C}) \to M_n(\mathbb{C})$ is called positive if $X \succeq O$ implies $\Phi(X) \succeq O$, and the transpose is an example of one such map. Based on positive maps, we can define block-positive matrices, which are matrices of the form $W := (\mathrm{id}_m \otimes \Phi)(X)$ for some $O \preceq X \in M_m(\mathbb{C}) \otimes M_m(\mathbb{C})$ and positive $\Phi : M_m(\mathbb{C}) \to M_n(\mathbb{C})$. Equivalently, $W$ is block-positive if and only if $(\langle a| \otimes \langle b|) W (|a\rangle \otimes |b\rangle) \ge 0$ for all $|a\rangle \in \mathbb{C}^m$ and all $|b\rangle \in \mathbb{C}^n$.

If $W$ is block-positive but not positive semidefinite, it is called an entanglement witness, since it is then the case that $\mathrm{Tr}(W\sigma) \ge 0$ for all separable $\sigma \in M_m(\mathbb{C}) \otimes M_n(\mathbb{C})$, but there exists some (necessarily entangled) mixed state $\rho \in M_m(\mathbb{C}) \otimes M_n(\mathbb{C})$ such that $\mathrm{Tr}(W\rho) < 0$. That is, $W$ verifies or "witnesses" the fact that $\rho$ is entangled (geometrically, $W$ acts as a separating hyperplane that separates $\rho$ from the convex set of separable states).

A matrix is called decomposable if it can be written in the form $W = X^\Gamma + Y$, where $X, Y \succeq O$. It is straightforward to verify that every decomposable matrix is block-positive. However, the converse of this statement (i.e., every block-positive matrix is decomposable) is true if and only if $(m,n) = (2,2)$, $(2,3)$, or $(3,2)$ [15, 16]. We denote the sets of block-positive and decomposable block-positive matrices in $M_m(\mathbb{C}) \otimes M_n(\mathbb{C})$ by $\mathrm{BP}_{m,n}$ and $\mathrm{DBP}_{m,n}$, respectively.

Our primary interest in this work is characterizing the possible spectra of entanglement witnesses. However, it is a bit more natural to work with the set of block-positive matrices, since it is closed and convex (neither of which is true of the set of entanglement witnesses). This distinction does not matter much, since any spectral inequality that we obtain for the block-positive matrices can be turned into a spectral inequality for entanglement witnesses by just adding the condition "at least one of the eigenvalues is strictly negative". With this in mind, we are now in a position to define the two main sets that we will be investigating throughout the rest of this paper:
\[ \sigma(\mathrm{BP}_{m,n}) \stackrel{\mathrm{def}}{=} \big\{ (\mu_1, \mu_2, \ldots, \mu_{mn}) : \exists\, W \in \mathrm{BP}_{m,n} \text{ with eigenvalues } \mu_1, \mu_2, \ldots, \mu_{mn} \big\}, \]
\[ \sigma(\mathrm{DBP}_{m,n}) \stackrel{\mathrm{def}}{=} \big\{ (\mu_1, \mu_2, \ldots, \mu_{mn}) : \exists\, W \in \mathrm{DBP}_{m,n} \text{ with eigenvalues } \mu_1, \mu_2, \ldots, \mu_{mn} \big\}. \]
In words, $\sigma(\mathrm{BP}_{m,n})$ and $\sigma(\mathrm{DBP}_{m,n})$ are the sets of possible spectra of block-positive matrices and decomposable block-positive matrices, respectively. We emphasize that we do not require the vectors in these sets to be sorted: if a particular vector is in one of these sets, then so is every vector obtained by permuting its entries. However, it will sometimes be convenient to refer to the ordered eigenvalues of a block-positive matrix, so we sometimes use the notation $\vec{\mu}^\downarrow$ to refer to the vector with the same entries as $\vec{\mu}$, but sorted in non-increasing order (i.e., $\mu_1^\downarrow \ge \mu_2^\downarrow \ge \cdots \ge \mu_{mn}^\downarrow$).

We now summarize all known results concerning the spectrum of a (decomposable) block-positive matrix $W \in M_m(\mathbb{C}) \otimes M_n(\mathbb{C})$ that we are aware of:

• $W$ has no more than $(m-1)(n-1)$ negative eigenvalues [17, 18], and this number of negative eigenvalues is attainable even if $W$ is decomposable [19].
In particular, if $m = n = 2$ then every entanglement witness has exactly $1$ negative eigenvalue.

• $\lambda_{\min}(W)/\lambda_{\max}(W) \ge 1 - \min\{m, n\}$ [20, 21].

• If $W$ has $q$ negative eigenvalues then [21]:
\[ \frac{\lambda_{\min}(W)}{\lambda_{\max}(W)} \ge -\frac{mn}{\sqrt{mn - q}\,\big(\sqrt{mn} - 1\big) + \sqrt{mnq - q^2}} \quad \text{and} \quad \frac{\lambda_{\min}(W)}{\lambda_{\max}(W)} \ge -\left\lceil \tfrac{1}{2}\Big( m + n - \sqrt{(m-n)^2 + 4q - 4} \Big) \right\rceil. \]

• $\mathrm{Tr}(W)^2 \ge \mathrm{Tr}(W^2)$ [22] (this is a spectral condition since $\mathrm{Tr}(W) = \sum_j \lambda_j(W)$ and $\mathrm{Tr}(W^2) = \sum_j \lambda_j(W)^2$).

• If $W$ is decomposable then $\lambda_{\min}(W) \ge -\mathrm{Tr}(W)/2$ [23].

There is one family of block-positive matrices whose eigenvalues are particularly simple to analyze: the matrices of the form $(|v\rangle\langle v|)^\Gamma$, where $|v\rangle \in \mathbb{C}^m \otimes \mathbb{C}^n$. The following lemma is well-known (see [24] for example), but we prove it for completeness. Note that the lemma relies on the Schmidt coefficients of $|v\rangle$, which are the singular values of $|v\rangle$ when it is thought of as an $m \times n$ matrix (this is a standard tool in quantum information theory, so the reader is directed to a textbook like [25] for further details).

Lemma 1.
Suppose $|v\rangle \in \mathbb{C}^m \otimes \mathbb{C}^n$, and for simplicity assume that $m \le n$. If $|v\rangle$ has Schmidt coefficients $\alpha_1 \ge \alpha_2 \ge \cdots \ge \alpha_m \ge 0$ then the matrix $(|v\rangle\langle v|)^\Gamma$ has eigenvalues $\alpha_j^2$ for $1 \le j \le m$, $\pm\alpha_i\alpha_j$ for $1 \le i < j \le m$, and $0$ with extra multiplicity $m(n-m)$.

Proof.
By virtue of the Schmidt decomposition, we can write
\[ |v\rangle = \sum_{j=1}^m \alpha_j |b_j\rangle \otimes |c_j\rangle, \]
where $\{|b_j\rangle\} \subseteq \mathbb{C}^m$ and $\{|c_j\rangle\} \subseteq \mathbb{C}^n$ are orthonormal sets of vectors. A straightforward computation shows that
\[ (|v\rangle\langle v|)^\Gamma = \sum_{i,j=1}^m \alpha_i \alpha_j\, |b_i\rangle\langle b_j| \otimes |c_j\rangle\langle c_i|. \]
To find the eigenvalues, we first define the following (eigen)vectors:
\[ |x_j\rangle := |b_j\rangle \otimes |c_j\rangle \text{ for } 1 \le j \le m \quad \text{and} \quad |y_{i,j}^\pm\rangle := \frac{1}{\sqrt{2}}\big( |b_i\rangle \otimes |c_j\rangle \pm |b_j\rangle \otimes |c_i\rangle \big) \text{ for } 1 \le i < j \le m. \]
Then direct computation shows that
\[ (|v\rangle\langle v|)^\Gamma |x_j\rangle = \alpha_j^2 |x_j\rangle \quad \text{and} \quad (|v\rangle\langle v|)^\Gamma |y_{i,j}^\pm\rangle = \pm\alpha_i\alpha_j |y_{i,j}^\pm\rangle, \]
which establishes the claim about the potentially non-zero eigenvalues of $(|v\rangle\langle v|)^\Gamma$. To see that there are $m(n-m)$ extra zero eigenvalues (for a total of $m + m(m-1) + m(n-m) = mn$ eigenvalues), first extend $\{|c_j\rangle\}$ to an orthonormal basis $\{|c_1\rangle, \ldots, |c_m\rangle, |d_1\rangle, \ldots, |d_{n-m}\rangle\}$ of $\mathbb{C}^n$. Then define the (eigen)vectors $|z_{i,j}\rangle := |b_i\rangle \otimes |d_j\rangle$ for $1 \le i \le m$ and $1 \le j \le n-m$. It is then straightforward to check that $(|v\rangle\langle v|)^\Gamma |z_{i,j}\rangle = 0$ for all $i, j$, which completes the proof.

One nice feature of the sets $\sigma(\mathrm{BP}_{m,n})$ and $\sigma(\mathrm{DBP}_{m,n})$ is that they are closed (this is not hard to prove, but the argument is done explicitly for $\sigma(\mathrm{BP}_{m,n})$ in [26]) and they are cones: if $c \in \mathbb{R}$ is a non-negative scalar and $\vec{\mu} \in \sigma(\mathrm{BP}_{m,n})$, then $c\vec{\mu} \in \sigma(\mathrm{BP}_{m,n})$ too (and similarly for $\sigma(\mathrm{DBP}_{m,n})$).
Given a particular cone $C \subseteq \mathbb{R}^n$, its dual cone $C^\circ$ is defined by
\[ C^\circ \stackrel{\mathrm{def}}{=} \big\{ \vec{x} \in \mathbb{R}^n : \langle \vec{x}, \vec{y} \rangle \ge 0 \text{ for all } \vec{y} \in C \big\}. \]
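As a quick numerical illustration of this definition (the sampling approach and the helper name `violates_dual_cone` are ours): membership of $\vec{x}$ in $C^\circ$ can be refuted by exhibiting a single witness $\vec{y} \in C$ with $\langle \vec{x}, \vec{y} \rangle < 0$, while passing the test on finitely many samples is only evidence of membership.

```python
import numpy as np

rng = np.random.default_rng(0)

def violates_dual_cone(x, cone_samples):
    """Return True if some sampled y from the cone C has <x, y> < 0.

    A single such y proves that x lies outside the dual cone C°; passing
    the check for every sample is only finite evidence that x is in C°.
    """
    return any(np.dot(x, y) < 0 for y in cone_samples)

# Sample the cone C = non-negative orthant in R^3, which is self-dual:
# (1, 2, 3) lies in C° but (1, 1, -1) does not.
samples = [rng.uniform(0.0, 1.0, size=3) for _ in range(1000)]
print(violates_dual_cone(np.array([1.0, 2.0, 3.0]), samples))   # in C°
print(violates_dual_cone(np.array([1.0, 1.0, -1.0]), samples))  # outside C°
```

This "separating witness" viewpoint is exactly how dual cones are used below: a spectrum is excluded from a dual cone by a single inner product that goes negative.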
It is well-known that the double-dual of any closed cone $C \subseteq \mathbb{R}^n$ is its convex hull: $C^{\circ\circ} = \mathrm{Conv}(C)$ [27]. For this reason, in this work it will often be much easier to work with $\mathrm{Conv}(\sigma(\mathrm{BP}_{m,n}))$ and $\mathrm{Conv}(\sigma(\mathrm{DBP}_{m,n}))$ instead of $\sigma(\mathrm{BP}_{m,n})$ and $\sigma(\mathrm{DBP}_{m,n})$ directly. Before proceeding, it is worth demonstrating that $\sigma(\mathrm{BP}_{m,n})$ and $\sigma(\mathrm{DBP}_{m,n})$ are indeed not convex:

Example 1.
Consider the positive semidefinite matrix
\[ X = |v\rangle\langle v| = \begin{pmatrix} 4 & 0 & 0 & 2 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 2 & 0 & 0 & 1 \end{pmatrix} \in M_2(\mathbb{C}) \otimes M_2(\mathbb{C}), \quad \text{where } |v\rangle = 2|0\rangle \otimes |0\rangle + |1\rangle \otimes |1\rangle. \]
Then
\[ W := X^\Gamma = \begin{pmatrix} 4 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \]
is a block-positive matrix with eigenvalues $(4, 2, 1, -2) \in \sigma(\mathrm{BP}_{2,2})$. Since $\sigma(\mathrm{BP}_{2,2})$ is invariant under permutations of its vectors' entries, we also have $(4, 2, -2, 1) \in \sigma(\mathrm{BP}_{2,2})$. Thus
\[ \tfrac{1}{2}(4, 2, 1, -2) + \tfrac{1}{2}(4, 2, -2, 1) = \big(4, 2, -\tfrac{1}{2}, -\tfrac{1}{2}\big) \in \mathrm{Conv}\big(\sigma(\mathrm{BP}_{2,2})\big). \]
But as we mentioned in Section 2.3, it is well-known that a block-positive matrix in $\mathrm{BP}_{2,2}$ can have at most one negative eigenvalue, which shows that $\sigma(\mathrm{BP}_{2,2})$ is not convex. Similar examples can also be constructed in higher dimensions.

One useful tool for probing closed convex cones is semidefinite programming, which is a method of optimizing a linear function subject to constraints involving positive semidefinite matrices. We do not give a full introduction to semidefinite programming here (see [27] for such an introduction), but rather we note that it contains linear programming as a special case. For our purposes, given a linear map $L : \mathbb{R}^m \to M_n(\mathbb{R})$, a vector $\vec{v} \in \mathbb{R}^m$, and a symmetric matrix $A \in M_n(\mathbb{R})$, the associated semidefinite program is the following pair of optimization problems:
\[ \begin{aligned} &\textbf{Primal problem} \\ &\text{minimize:} \quad \langle \vec{v}, \vec{x} \rangle \\ &\text{subject to:} \quad L(\vec{x}) \succeq A, \quad \vec{0} \le \vec{x} \in \mathbb{R}^m \end{aligned} \qquad \begin{aligned} &\textbf{Dual problem} \\ &\text{maximize:} \quad \mathrm{Tr}(AY) \\ &\text{subject to:} \quad L^*(Y) \le \vec{v}, \quad O \preceq Y \in M_n(\mathbb{R}), \end{aligned} \tag{1} \]
where $L^* : M_n(\mathbb{R}) \to \mathbb{R}^m$ is the dual map of $L$, defined by the fact that $\mathrm{Tr}(L(\vec{x})Y) = \langle \vec{x}, L^*(Y) \rangle$ for all $\vec{x} \in \mathbb{R}^m$ and $Y \in M_n(\mathbb{R})$.

Weak duality for semidefinite programs says that $\langle \vec{v}, \vec{x} \rangle \ge \mathrm{Tr}(AY)$ whenever $\vec{x}$ and $Y$ satisfy the constraints of the semidefinite program. Strong duality says that, under slightly stronger assumptions, we can find particular $\vec{x}$ and $Y$ so that this inequality becomes an equality. The following theorem provides one possible set of assumptions that lead to strong duality (see [28, Lecture 7], for example):

Theorem 1 (Slater conditions for strong duality). If there exists $\vec{x} > \vec{0}$ such that $L(\vec{x}) \succ A$ and $Y \succ O$ such that $L^*(Y) < \vec{v}$, then strong duality holds for the semidefinite program (1). That is, there exists a
That is, there exists anfeasible point (cid:126) x of the primal problem and a feasible point Y of the dual problem such that (cid:104) (cid:126) v , (cid:126) x (cid:105) = Tr ( AY ) , and this quantity is the optimal value of both optimization problems. In words, Slater’s theorem says that strong duality holds for any semidefinite program in which boththe primal problem and the dual problem are strictly feasible .5 .5 Absolute Separability and Absolute PPT The final ingredient that we need to be able to prove our results is absolute separability. A state ρ ∈ M m ( C ) ⊗ M n ( C ) is called absolutely separable [29] if U ρ U ∗ is separable for all unitary matrices U ∈ M m ( C ) ⊗ M n ( C ) . Similarly, a state is said to be absolutely PPT [24] if U ρ U ∗ has positive partialtranspose (PPT) for all unitary matrices U ∈ M m ( C ) ⊗ M n ( C ) . Since both absolute separability andabsolute PPT only depend on the spectrum of the state ρ , we define these sets via those spectra instead ofvia the states:ASEP m , n def = (cid:8) ( λ , . . . , λ mn ) : ∃ abs. sep. ρ ∈ M m ( C ) ⊗ M n ( C ) with eigenvalues λ , . . . , λ mn (cid:9) , APPT m , n def = (cid:8) ( λ , . . . , λ mn ) : ∃ abs. PPT ρ ∈ M m ( C ) ⊗ M n ( C ) with eigenvalues λ , . . . , λ mn (cid:9) . In the case when one of the local dimensions is , the sets ASEP n and APPT n have a simple char-acterization [24, 30, 31]:ASEP n = APPT n = (cid:8) ( λ , . . . , λ n ) : λ ≤ λ n − + (cid:112) λ n − λ n (cid:9) = (cid:26) ( λ , . . . , λ n ) : (cid:20) λ n λ n − − λ λ n − − λ λ n − (cid:21) (cid:23) O (cid:27) . When m , n ≥ , the set APPT m , n has been completely characterized (again, see [24]), but the characteri-zation is rather complicated, so we leave the details until we need them in Section 5. Not much is knownabout the set ASEP m , n when m , n ≥ other than the obvious fact that ASEP m , n ⊆ APPT m , n . 
However, itis not even known whether or not this inclusion is strict [32].The reason for our interest in absolute separability and absolute PPT in this work is the followingresult, which establishes a duality result between these problems and the inverse eigenvalue problem forblock positive matrices: Theorem 2.
The following duality relationships hold for all $m, n \ge 2$:
\[ \sigma(\mathrm{BP}_{m,n})^\circ = \mathrm{ASEP}_{m,n} \quad \text{and} \quad \sigma(\mathrm{DBP}_{m,n})^\circ = \mathrm{APPT}_{m,n}. \]

Proof.
This result for $\mathrm{APPT}_{m,n}$ was essentially shown (though not explicitly stated in this way) in [24]. With that in mind, we explicitly prove the characterization of $\mathrm{ASEP}_{m,n}$, and just note that the analogous result for $\mathrm{APPT}_{m,n}$ can be proved in a very similar manner.

We start by noting that $\vec{\lambda} \in \mathrm{ASEP}_{m,n}$ if and only if
\[ \mathrm{Tr}\big( W (U \mathrm{diag}(\vec{\lambda}) U^*) \big) \ge 0 \text{ for all } W \in \mathrm{BP}_{m,n} \text{ and unitary } U \in M_m(\mathbb{C}) \otimes M_n(\mathbb{C}). \tag{2} \]
It is a well-known result (see [33, Problem III.6.14], for example) that $\mathrm{Tr}(W(U \mathrm{diag}(\vec{\lambda}) U^*)) \ge 0$ for all unitary $U$ if and only if
\[ \sum_{j=1}^{mn} \lambda_j \mu_{p(j)} \ge 0 \text{ for all permutations } p : [mn] \to [mn], \]
where $\mu_1, \mu_2, \ldots, \mu_{mn}$ are the eigenvalues of $W$. (In fact, it suffices to check that $\sum_{j=1}^{mn} \lambda_j^\downarrow \mu_{mn+1-j}^\downarrow \ge 0$, but this additional simplification is not relevant for our purposes.)

Since $\sigma(\mathrm{BP}_{m,n})$ is invariant under permutations of its vectors' entries, it follows that condition (2) is equivalent to
\[ \vec{\lambda} \cdot \vec{\mu} = \sum_{j=1}^{mn} \lambda_j \mu_j \ge 0 \text{ for all } \vec{\mu} \in \sigma(\mathrm{BP}_{m,n}). \]
In other words, we have shown that $\vec{\lambda} \in \mathrm{ASEP}_{m,n}$ if and only if $\vec{\lambda} \in \sigma(\mathrm{BP}_{m,n})^\circ$, so $\sigma(\mathrm{BP}_{m,n})^\circ = \mathrm{ASEP}_{m,n}$, as desired.

By taking the dual cone of the sets in Theorem 2 and using the fact that the double-dual of a closed cone is its convex hull, we immediately obtain the following corollary:

Corollary 1.
The following duality relationships hold for all $m, n \ge 2$:
\[ \mathrm{ASEP}_{m,n}^\circ = \mathrm{Conv}\big(\sigma(\mathrm{BP}_{m,n})\big) \quad \text{and} \quad \mathrm{APPT}_{m,n}^\circ = \mathrm{Conv}\big(\sigma(\mathrm{DBP}_{m,n})\big). \]

It is worth noting that these results show that the sets
$\mathrm{Conv}\big(\sigma(\mathrm{BP}_{m,n})\big)$ and $\mathrm{Conv}\big(\sigma(\mathrm{DBP}_{m,n})\big)$ are the sets of spectra of what might be called "absolute separability witnesses" and "absolute PPT witnesses", respectively. These types of witnesses were considered in [26].

With the preliminaries out of the way, we now consider the simplest non-trivial entanglement witnesses that exist, which are those that live in $M_2(\mathbb{C}) \otimes M_2(\mathbb{C})$ (a two-dimensional piece of quantum information is called a "qubit", so these entanglement witnesses are sometimes called "two-qubit" entanglement witnesses). Recall that every block-positive matrix $W \in M_2(\mathbb{C}) \otimes M_n(\mathbb{C})$ (when $n = 2$ or $n = 3$) is decomposable and thus can be written in the form $W = X^\Gamma + Y$, where $X, Y \in M_2(\mathbb{C}) \otimes M_n(\mathbb{C})$ are both positive semidefinite. Before proceeding, we first need the following slight strengthening of this fact in the $n = 2$ case:

Lemma 2.
A matrix $W \in M_2(\mathbb{C}) \otimes M_2(\mathbb{C})$ is block-positive if and only if there exist positive semidefinite $X, Y \in M_2(\mathbb{C}) \otimes M_2(\mathbb{C})$ such that $W = X^\Gamma + Y$. Furthermore, $X$ can be chosen to have rank $1$.

Proof. As we already noted, everything except for the "furthermore" remark is already known, so we just need to show that we can choose $X$ to have rank $1$. To this end, suppose without loss of generality that $W$ is scaled so that $\mathrm{Tr}(X) = 1$ (and thus $X$ is a mixed state). Then we use the fact from [34, Section IV] that we can write $X$ in the form
\[ X = \frac{1}{4}(U \otimes V)\Big( I + \sum_{k=1}^3 d_k\, \sigma_k \otimes \sigma_k \Big)(U \otimes V)^*, \]
where $U, V \in M_2(\mathbb{C})$ are unitary, $d_1, d_2, d_3$ are real numbers, and $\sigma_1, \sigma_2, \sigma_3$ are the Pauli matrices
\[ \sigma_1 := \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_2 := \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \text{and} \quad \sigma_3 := \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \]
Since conjugation by $U \otimes V$ has no effect on the properties we are investigating (positive semidefiniteness, being an entanglement witness, having positive partial transpose, and so on), from now on we assume without loss of generality that $X$ has the form
\[ X = \frac{1}{4}\Big( I + \sum_{k=1}^3 d_k\, \sigma_k \otimes \sigma_k \Big). \]
As noted in [34], $X$ being a quantum state is equivalent to the statement that $(d_1, d_2, d_3)$ is in the convex hull of the four points $(1, 1, -1)$, $(1, -1, 1)$, $(-1, 1, 1)$, and $(-1, -1, -1)$. It is straightforward to check that the four matrices corresponding to these points have rank $1$. For example, if $(d_1, d_2, d_3) = (1, -1, 1)$ then direct computation shows that
\[ \frac{1}{4}\big( I + \sigma_1 \otimes \sigma_1 - \sigma_2 \otimes \sigma_2 + \sigma_3 \otimes \sigma_3 \big) = |\psi_+\rangle\langle\psi_+|, \]
where $|\psi_+\rangle$ is the pure state $|\psi_+\rangle := (|0\rangle \otimes |0\rangle + |1\rangle \otimes |1\rangle)/\sqrt{2}$.

This convex hull is a tetrahedron in $\mathbb{R}^3$, and similarly the set of $(d_1, d_2, d_3)$ corresponding to (not necessarily positive semidefinite) matrices with positive partial transpose is another tetrahedron. Their intersection is an octahedron that corresponds to the set of PPT states. These tetrahedra and octahedron are depicted in Figure 1.

Figure 1: The region of tuples $(d_1, d_2, d_3) \in \mathbb{R}^3$ corresponding to the set of quantum states (blue tetrahedron), partial transposes of quantum states (yellow tetrahedron), and PPT states (octahedron where the two tetrahedra intersect). Every state in the blue tetrahedron can be written as a convex combination of the rank-$1$ state at the nearest vertex of the tetrahedron and some PPT state in the octahedron.

The result now follows almost immediately from this geometric picture. Since $X$ is in the (blue) tetrahedron of mixed states, it can be written in the form $t|v\rangle\langle v| + (1-t)\sigma$, where $|v\rangle\langle v|$ is the rank-$1$ state at the closest of the four corners of the tetrahedron, and $\sigma$ is a PPT state ($\sigma$ can be chosen to be the closest point in the central octahedron). Thus
\[ W = X^\Gamma + Y = \big( t|v\rangle\langle v| + (1-t)\sigma \big)^\Gamma + Y = \big( t|v\rangle\langle v| \big)^\Gamma + \big( (1-t)\sigma^\Gamma + Y \big). \]
Since $\sigma$ is PPT, $\sigma^\Gamma$ is positive semidefinite, so this is a decomposition of $W$ of the desired form.

It is worth noting that Lemma 2 is not true in $M_2(\mathbb{C}) \otimes M_3(\mathbb{C})$. As mentioned earlier, we can indeed always write a block-positive matrix $W \in M_2(\mathbb{C}) \otimes M_3(\mathbb{C})$ in the form $W = X^\Gamma + Y$, where $X, Y \in M_2(\mathbb{C}) \otimes M_3(\mathbb{C})$ are positive semidefinite. However, the following example shows that we cannot in general choose $X$ to have rank $1$.

Example 2.
Consider the matrix
\[ W = \begin{pmatrix} 1 & 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 \end{pmatrix}^{\Gamma} \in M_2(\mathbb{C}) \otimes M_3(\mathbb{C}). \]
Since $W$ is the partial transpose of a positive semidefinite matrix, it is block-positive. However, direct calculation shows that its eigenvalues are $1$ and $(1 \pm \sqrt{5})/2$, each with multiplicity $2$. Since two of these eigenvalues are negative, it cannot possibly be written in the form $W = X^\Gamma + Y$ with $X, Y$ positive semidefinite and $X$ having rank $1$, since Lemma 1 implies that any such $W$ has at most one negative eigenvalue.

We are now able to state the main result of this section, which provides a complete characterization of the eigenvalues of two-qubit block-positive matrices/entanglement witnesses.
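The eigenvalues claimed in Example 2 are easy to double-check numerically. In the sketch below (helper names ours), the positive semidefinite matrix $X$ is assembled from the two product-basis vectors $|0\rangle\otimes|0\rangle + |1\rangle\otimes|1\rangle$ and $|0\rangle\otimes|1\rangle + |1\rangle\otimes|2\rangle$, a rank-$2$ choice consistent with the stated spectrum:

```python
import numpy as np

def partial_transpose(A, m, n):
    """Transpose the second (n-dimensional) tensor factor of an mn x mn matrix A."""
    return A.reshape(m, n, m, n).transpose(0, 3, 2, 1).reshape(m * n, m * n)

# X = |v1><v1| + |v2><v2| with |v1> = |0>|0> + |1>|1> and |v2> = |0>|1> + |1>|2>,
# written in the product basis (00, 01, 02, 10, 11, 12) of C^2 tensor C^3.
v1 = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0, 0.0, 0.0, 1.0])
X = np.outer(v1, v1) + np.outer(v2, v2)

W = partial_transpose(X, 2, 3)
eigs = np.linalg.eigvalsh(W)  # ascending order
print(np.round(eigs, 6))
# The two copies of (1 - sqrt(5))/2 (about -0.618) are the two negative
# eigenvalues that rule out a decomposition W = X^Gamma + Y with rank-1 X.
```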
Theorem 3.
There exists a block-positive matrix in $M_2(\mathbb{C}) \otimes M_2(\mathbb{C})$ with eigenvalues $\mu_1, \mu_2, \mu_3, \mu_4 \in \mathbb{R}$ if and only if the following three inequalities hold:

(a) $\mu_2^\downarrow \ge 0$,
(b) $\mu_4^\downarrow \ge -\mu_2^\downarrow$, and
(c) $\mu_4^\downarrow \ge -\sqrt{\mu_1^\downarrow \mu_3^\downarrow}$.

Proof. We start by proving that the three listed eigenvalue inequalities are necessary. As we mentioned in Section 2.3, inequality (a) is already known. To see that inequalities (b) and (c) are necessary, we start by noting that they both hold for block-positive matrices of the form $W = (|v\rangle\langle v|)^\Gamma$, since Lemma 1 says that if $\alpha_1 \ge \alpha_2$ are the Schmidt coefficients of $|v\rangle$ then the eigenvalues of $W$ are $\mu_1^\downarrow = \alpha_1^2$, $\mu_2^\downarrow = \alpha_1\alpha_2$, $\mu_3^\downarrow = \alpha_2^2$, and $\mu_4^\downarrow = -\alpha_1\alpha_2$, and it is straightforward to check that these quantities always satisfy inequalities (b) and (c).

To see that inequalities (b) and (c) are necessary for all block-positive matrices, we recall from Lemma 2 that we can write every block-positive matrix $W \in M_2(\mathbb{C}) \otimes M_2(\mathbb{C})$ in the form $W = t(|v\rangle\langle v|)^\Gamma + Y$, where $t \ge 0$ and $Y$ is positive semidefinite. We just showed that the inequalities $\mu_2^\downarrow + \mu_4^\downarrow \ge 0$ and $\mu_4^\downarrow + \sqrt{\mu_1^\downarrow \mu_3^\downarrow} \ge 0$ are satisfied by the eigenvalues of $(|v\rangle\langle v|)^\Gamma$. Since adding a positive semidefinite matrix $Y$ can only increase the eigenvalues, the eigenvalues of $W$ must also satisfy these same inequalities (i.e., (b) and (c)). This completes the proof of necessity.

To see that the inequalities are sufficient, we demonstrate a procedure for explicitly constructing a block-positive matrix with any set of eigenvalues satisfying inequalities (a), (b), and (c). If $\mu_4^\downarrow \ge 0$ then all eigenvalues are non-negative, so we can find a positive semidefinite $W$ that works, and if $\mu_1^\downarrow = 0$ then inequality (c) implies $\mu_4^\downarrow = 0$, so we can again find a positive semidefinite $W$. Thus we assume from now on that $\mu_1^\downarrow > 0 > \mu_4^\downarrow$.
Define $\alpha_1 := \sqrt{\mu_1^\downarrow}$ and $\alpha_2 := -\mu_4^\downarrow/\alpha_1$, as well as $\gamma_1 := \mu_3^\downarrow - (\mu_4^\downarrow)^2/\mu_1^\downarrow$ and $\gamma_2 := \mu_2^\downarrow + \mu_4^\downarrow$ (note that $\gamma_1 \ge 0$ by inequality (c) and $\gamma_2 \ge 0$ by inequality (b)). Then, with $|v\rangle := \alpha_1 |0\rangle \otimes |0\rangle + \alpha_2 |1\rangle \otimes |1\rangle$, the matrix
\[ W = (|v\rangle\langle v|)^\Gamma + \gamma_1 \big( |1\rangle\langle 1| \otimes |1\rangle\langle 1| \big) + \frac{\gamma_2}{2} \big( |0\rangle \otimes |1\rangle + |1\rangle \otimes |0\rangle \big)\big( \langle 0| \otimes \langle 1| + \langle 1| \otimes \langle 0| \big) \]
is block-positive and has eigenvalues $\mu_1^\downarrow \ge \mu_2^\downarrow \ge \mu_3^\downarrow \ge \mu_4^\downarrow$ (the corresponding eigenvectors are $|0\rangle \otimes |0\rangle$, $|0\rangle \otimes |1\rangle + |1\rangle \otimes |0\rangle$, $|1\rangle \otimes |1\rangle$, and $|0\rangle \otimes |1\rangle - |1\rangle \otimes |0\rangle$, respectively).

In order to visualize the set of possible spectra of two-qubit block-positive matrices $W$, note that (since $\mathrm{Tr}(W) \ge 0$, with $\mathrm{Tr}(W) = 0$ if and only if $W = O$) we can always scale $W$ to have $\mathrm{Tr}(W) = 1$. In other words, we can choose $\mu_4 = 1 - \mu_1 - \mu_2 - \mu_3$ and then plot the tuples $(\mu_1, \mu_2, \mu_3) \in \mathbb{R}^3$, where $\mu_1, \mu_2, \mu_3 \ge 0$ are the three non-negative eigenvalues of $W$. This region is displayed in Figure 2; the fact that it is not convex should not be surprising, given Example 1.

Figure 2: The region of tuples $(\mu_1, \mu_2, \mu_3) \in \mathbb{R}^3$ with the property that $(\mu_1, \mu_2, \mu_3, \mu_4)$ is the spectrum of a block-positive matrix, where $\mu_1, \mu_2, \mu_3 \ge 0$ and $\mu_4 = 1 - \mu_1 - \mu_2 - \mu_3$ (i.e., the block-positive matrix is normalized to have trace $1$).

We now consider entanglement witnesses in the case where just one of the local dimensions is $2$. In this case, we are only able to get a new family of necessary conditions that the spectra must satisfy, not necessary and sufficient conditions like in the previous section. The main result of this section provides a complete characterization of $\mathrm{Conv}\big(\sigma(\mathrm{BP}_{2,n})\big)$ and $\mathrm{Conv}\big(\sigma(\mathrm{DBP}_{2,n})\big)$, which in turn imply necessary conditions on $\sigma(\mathrm{BP}_{2,n})$:

Theorem 4.
Suppose $\vec{\mu} \in \mathbb{R}^{2n}$. Define
\[ s_k := \sum_{j=k}^{2n} \mu_j^\downarrow \text{ for } k = 1, 2, 3 \quad \text{and} \quad s_- := \frac{1}{2}\Big( \sum_{j=1}^{2n} \mu_j - \sum_{j=1}^{2n} |\mu_j| \Big) = \sum_{j : \mu_j < 0} \mu_j. \]
Then the following are equivalent:

(a) $\vec{\mu} \in \mathrm{Conv}\big(\sigma(\mathrm{BP}_{2,n})\big)$.
(b) $\vec{\mu} \in \mathrm{Conv}\big(\sigma(\mathrm{DBP}_{2,n})\big)$.
(c) There exists a positive semidefinite matrix $X \in M_2(\mathbb{R})$ such that
\[ 2(x_{1,1} + x_{2,2}) \le s_1, \quad 2x_{2,2} \le s_2, \quad 2(x_{1,2} + x_{2,2}) \le s_3, \quad \text{and} \quad 2x_{1,2} \le s_-. \]
(d) If we define $q_1 := s_1^2 - 4s_-^2$ and $q_2 := (s_1 + 2s_3)^2 - 8s_3^2$ then the following inequalities all hold:
\[ q_1, q_2 \ge 0, \qquad \sqrt{q_1} \ge s_1 - 2s_2, \qquad \sqrt{q_2} \ge s_1 - 4s_2 + 2s_3, \qquad 2\sqrt{q_1} + \sqrt{q_2} \ge s_1 - 2s_3. \]

Proof.
The equivalence of (a) and (b) follows immediately from Corollary 1 and the fact that $\mathrm{ASEP}_{2,n} = \mathrm{APPT}_{2,n}$ [30].

To see that (b) and (c) are equivalent, we use Corollary 1 to note that $\vec{\mu} \in \mathrm{Conv}\big(\sigma(\mathrm{DBP}_{2,n})\big)$ if and only if $\vec{\mu} \in \mathrm{APPT}_{2,n}^\circ$, if and only if the optimal value of the following (primal) semidefinite program is non-negative:
\[ \begin{aligned} \text{minimize:} \quad & \sum_{j=1}^{2n} \mu_j^\downarrow \lambda_{2n+1-j} \\ \text{subject to:} \quad & \begin{pmatrix} 2\lambda_{2n} & \lambda_{2n-1} - \lambda_1 \\ \lambda_{2n-1} - \lambda_1 & 2\lambda_{2n-2} \end{pmatrix} \succeq O \\ & \sum_{j=1}^{2n} \lambda_j = 1 \\ & \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_{2n} \ge 0 \end{aligned} \tag{3} \]
(Note that the constraint $\sum_{j=1}^{2n} \lambda_j = 1$ does not actually affect whether or not the optimal value is non-negative, but it is easier to demonstrate that strong duality holds with it present.) It will be convenient to instead work with the dual of the above SDP, which (after some routine calculation and simplification) has the following form:
\[ \begin{aligned} \text{maximize:} \quad & d \\ \text{subject to:} \quad & -2b + z_{2n-1} + d \le \mu_{2n}^\downarrow \\ & -z_k + z_{k-1} + d \le \mu_k^\downarrow \quad \text{for } k = 4, 5, \ldots, 2n-1 \\ & 2c - z_3 + z_2 + d \le \mu_3^\downarrow \\ & 2b - z_2 + z_1 + d \le \mu_2^\downarrow \\ & 2a - z_1 + d \le \mu_1^\downarrow \\ & d \in \mathbb{R}, \quad z_1, \ldots, z_{2n-1} \ge 0, \quad \begin{pmatrix} a & b \\ b & c \end{pmatrix} \succeq O. \end{aligned} \tag{4} \]
To see that strong duality holds for this pair of SDPs, note that we can choose a large scalar $c_0$ and a small scalar $\varepsilon > 0$, and then construct a strictly feasible point of the primal problem (3) by letting $\lambda_j$ be proportional to $c_0 - j\varepsilon$ (normalized so that $\sum_j \lambda_j = 1$). Similarly, to construct a strictly feasible point of the dual problem (4), we choose $b = 0$, $a = c = z_i = \varepsilon$ for all $i$, and $d$ very large and negative. Thus both the primal and dual problems are strictly feasible, so Slater's theorem (Theorem 1) tells us that strong duality holds, and in particular these SDPs have the same optimal value.

Thus the primal SDP (3) has non-negative optimal value (i.e., $\vec{\mu} \in \mathrm{Conv}\big(\sigma(\mathrm{DBP}_{2,n})\big)$) if and only if there is a feasible point of the dual SDP (4) with $d = 0$. To simplify this dual existence problem into the form described by condition (c) of the theorem, we note that we can eliminate the variables $z_1, \ldots, z_{2n-1}$ in a straightforward way, since all of the constraints involving them are linear. For example, to eliminate $z_{2n-1}$, we note that the only constraints that involve $z_{2n-1}$ are
\[ \max\big\{ 0,\; z_{2n-2} - \mu_{2n-1}^\downarrow \big\} \le z_{2n-1} \le \mu_{2n}^\downarrow + 2b. \]
We can thus "squeeze out" $z_{2n-1}$ by noting that it exists if and only if $\max\{0, z_{2n-2} - \mu_{2n-1}^\downarrow\} \le \mu_{2n}^\downarrow + 2b$ (i.e., if and only if $0 \le \mu_{2n}^\downarrow + 2b$ and $z_{2n-2} - \mu_{2n-1}^\downarrow \le \mu_{2n}^\downarrow + 2b$). After simplifying a bit, we see that we are now asking whether or not there exist $a, b, c, z_1, \ldots, z_{2n-2}$ such that the following constraints are satisfied:
\[ \begin{aligned} & -2b \le \mu_{2n}^\downarrow \\ & -2b + z_{2n-2} \le \mu_{2n-1}^\downarrow + \mu_{2n}^\downarrow \\ & -z_k + z_{k-1} \le \mu_k^\downarrow \quad \text{for } k = 4, 5, \ldots, 2n-2 \\ & 2c - z_3 + z_2 \le \mu_3^\downarrow \\ & 2b - z_2 + z_1 \le \mu_2^\downarrow \\ & 2a - z_1 \le \mu_1^\downarrow \\ & z_1, \ldots, z_{2n-2} \ge 0, \quad \begin{pmatrix} a & b \\ b & c \end{pmatrix} \succeq O. \end{aligned} \tag{5} \]
By repeating this same argument for $k = 2n-2, 2n-3, \ldots, 5, 4$, we similarly see that the pair of constraints
\[ -2b + z_k \le \sum_{j=k+1}^{2n} \mu_j^\downarrow \quad \text{and} \quad -z_k + z_{k-1} \le \mu_k^\downarrow \]
is equivalent to the pair of constraints (no longer depending on $z_k$)
\[ -2b + z_{k-1} \le \sum_{j=k}^{2n} \mu_j^\downarrow \quad \text{and} \quad -2b \le \sum_{j=k+1}^{2n} \mu_j^\downarrow. \]
Thus we see that the system of inequalities (5) is equivalent to the following system, which only asks for the existence of $a$, $b$, $c$, $z_1$, $z_2$, $z_3$:
\[ \begin{aligned} & -2b \le \sum_{j=k}^{2n} \mu_j^\downarrow \quad \text{for } k = 5, \ldots, 2n \\ & -2b + z_3 \le \sum_{j=4}^{2n} \mu_j^\downarrow \\ & 2c - z_3 + z_2 \le \mu_3^\downarrow \\ & 2b - z_2 + z_1 \le \mu_2^\downarrow \\ & 2a - z_1 \le \mu_1^\downarrow \\ & z_1, z_2, z_3 \ge 0, \quad \begin{pmatrix} a & b \\ b & c \end{pmatrix} \succeq O. \end{aligned} \]
Similarly eliminating $z_3$ via
\[ \max\big\{ 0,\; 2c + z_2 - \mu_3^\downarrow \big\} \le z_3 \le 2b + \sum_{j=4}^{2n} \mu_j^\downarrow \]
gives
\[ \begin{aligned} & -2b \le \sum_{j=k}^{2n} \mu_j^\downarrow \quad \text{for } k = 4, \ldots, 2n \\ & 2c - 2b + z_2 \le \sum_{j=3}^{2n} \mu_j^\downarrow \\ & 2b - z_2 + z_1 \le \mu_2^\downarrow \\ & 2a - z_1 \le \mu_1^\downarrow \\ & z_1, z_2 \ge 0, \quad \begin{pmatrix} a & b \\ b & c \end{pmatrix} \succeq O. \end{aligned} \]
Then eliminating $z_2$ via
\[ \max\big\{ 0,\; 2b + z_1 - \mu_2^\downarrow \big\} \le z_2 \le 2b - 2c + \sum_{j=3}^{2n} \mu_j^\downarrow \]
gives
\[ \begin{aligned} & -2b \le \sum_{j=k}^{2n} \mu_j^\downarrow \quad \text{for } k = 4, \ldots, 2n \\ & 2c - 2b \le \sum_{j=3}^{2n} \mu_j^\downarrow \\ & 2c + z_1 \le \sum_{j=2}^{2n} \mu_j^\downarrow \\ & 2a - z_1 \le \mu_1^\downarrow \\ & z_1 \ge 0, \quad \begin{pmatrix} a & b \\ b & c \end{pmatrix} \succeq O. \end{aligned} \]
Finally, eliminating $z_1$ via
\[ \max\big\{ 0,\; 2a - \mu_1^\downarrow \big\} \le z_1 \le -2c + \sum_{j=2}^{2n} \mu_j^\downarrow \]
gives
\[ \begin{aligned} & -2b \le \sum_{j=k}^{2n} \mu_j^\downarrow \quad \text{for } k = 4, \ldots, 2n \\ & 2c - 2b \le \sum_{j=3}^{2n} \mu_j^\downarrow \\ & 2c \le \sum_{j=2}^{2n} \mu_j^\downarrow \\ & 2a + 2c \le \sum_{j=1}^{2n} \mu_j^\downarrow \\ & \begin{pmatrix} a & b \\ b & c \end{pmatrix} \succeq O. \end{aligned} \tag{6} \]
To simplify this system of inequalities even further, we recall that $s_k = \sum_{j=k}^{2n} \mu_j^\downarrow$, and these sums are exactly the right-hand sides of the inequalities above. Also, we can add the constraint $-2b \le s_3$ without affecting anything, since it is implied by the existing constraint $2c - 2b \le s_3$ (recall that $c \ge 0$). Then we can replace the set of constraints $-2b \le s_k$ for $k = 3, 4, \ldots, 2n$ by the single constraint $-2b \le s_-$, since $s_- = \min_k \{s_k\}$, and we know that this minimum is not attained when $k = 1$ or $k = 2$, since the third and fourth inequalities in (6) tell us that $s_1 \ge s_2 \ge 0$. After making these simplifications, the system of inequalities has the form
\[ \begin{aligned} & -2b \le s_- \\ & 2c - 2b \le s_3 \\ & 2c \le s_2 \\ & 2a + 2c \le s_1 \\ & \begin{pmatrix} a & b \\ b & c \end{pmatrix} \succeq O. \end{aligned} \tag{7} \]
To finally get this system of inequalities into the form described by part (c) of the theorem, we define
\[ X = \begin{pmatrix} a & -b \\ -b & c \end{pmatrix} \]
and note that $X$ is positive semidefinite if and only if $\begin{pmatrix} a & b \\ b & c \end{pmatrix}$ is positive semidefinite. This completes the proof of the equivalence of (b) and (c).

We now show that conditions (c) and (d) of the theorem are equivalent. The proof of this fact is similar in flavor to the proof of the equivalence of (b) and (c): we just use quantifier elimination techniques to eliminate the variables $a$, $b$, and $c$ from the system of inequalities (7). First, note that the positive semidefinite constraint is equivalent to $a, c, ac - b^2 \ge 0$. However, note that if either $a = 0$ or $c = 0$ then $b = 0$, in which case the system is simply equivalent to the requirement that $\mu_j \ge 0$ for all $j$, which implies that the inequalities in condition (d) hold. Thus we assume from now on that $a, c \ne 0$.

Similarly, notice that if $b \le 0$ then the first inequality in (7) implies $\mu_j \ge 0$ for all $j$, which satisfies condition (d) of the theorem. Thus we assume without loss of generality that $b > 0$. If the system of inequalities is satisfied for a particular $a > 0$, then certainly it is satisfied if we increase $a$ even further (subject to the constraint $2a + 2c \le s_1$). Thus we may in fact assume that $a = s_1/2 - c$. After making these simplifications, the system of inequalities has the form
\[ \begin{aligned} & -2b \le s_- \\ & 2c - 2b \le s_3 \\ & 2c \le s_2 \\ & b^2 \le c(s_1/2 - c). \end{aligned} \tag{8} \]
Again, we might as well increase $b$ up to $\sqrt{c(s_1/2 - c)}$, so we assume this from now on.
This gives us the equivalent system

−2√(c(s_1/2 − c)) ≤ s_−
2c − 2√(c(s_1/2 − c)) ≤ s_3
2c ≤ s_2.    (9)

Well, the first inequality above is equivalent to 4c(s_1/2 − c) ≥ s_−², which is a quadratic inequality in c that is equivalent to the quadratic having two real roots, and c being between those roots:

−2s_− ≤ s_1 and (s_1 − √(s_1² − 4s_−²))/4 ≤ c ≤ (s_1 + √(s_1² − 4s_−²))/4.

The second inequality in the system (9) is a bit more inconvenient to deal with, since 2c − s_3 might be positive or negative, so we have to be more careful when squaring the inequality. Specifically, we find that it is equivalent to

2c ≤ s_3 or 4c(s_1/2 − c) ≥ (2c − s_3)².

By expanding the quadratic and solving for c, we see that it is equivalent to

(s_1 + 2s_3)² ≥ 8s_3² and ((s_1 + 2s_3) − √((s_1 + 2s_3)² − 8s_3²))/8 ≤ c ≤ ((s_1 + 2s_3) + √((s_1 + 2s_3)² − 8s_3²))/8.

Thus condition (c) of the theorem is equivalent to at least one of the following two systems of inequalities holding:

−2s_− ≤ s_1
(s_1 − √(s_1² − 4s_−²))/4 ≤ c ≤ (s_1 + √(s_1² − 4s_−²))/4
2c ≤ s_2
2c ≤ s_3    (10)

or

−2s_− ≤ s_1
(s_1 − √(s_1² − 4s_−²))/4 ≤ c ≤ (s_1 + √(s_1² − 4s_−²))/4
2c ≤ s_2
8s_3² ≤ (s_1 + 2s_3)²
((s_1 + 2s_3) − √((s_1 + 2s_3)² − 8s_3²))/8 ≤ c ≤ ((s_1 + 2s_3) + √((s_1 + 2s_3)² − 8s_3²))/8.    (11)

Well, to eliminate c from the system of inequalities (10), we note that 2c ≤ s_3 implies s_3 ≥ 0, so µ↓_2 ≥ 0, so s_2 ≥ s_3, which implies that we can discard the 2c ≤ s_2 inequality, as it is redundant. Then when we "squeeze out" c, we are left with the equivalent system

−2s_− ≤ s_1
s_1 − √(s_1² − 4s_−²) ≤ 2s_3.    (12)

On the other hand, if we "squeeze out" c from the system of inequalities (11), we are left with the much uglier system (where we recall that q_1 := s_1² − 4s_−² and q_2 := (s_1 + 2s_3)² − 8s_3²):

q_1, q_2 ≥ 0
√q_1 ≥ s_1 − 2s_2
√q_2 ≥ s_1 − 4s_2 + 2s_3
2√q_1 + √q_2 ≥ |s_1 − 2s_3|.    (13)

Thus condition (c) of the theorem is equivalent to at least one of the systems of inequalities (12) and (13) holding. To remove the absolute value bars from the system (13), we first note that if s_3 < 0 then 2s_3 − s_1 < 0, so the inequality 2√q_1 + √q_2 ≥ 2s_3 − s_1 is trivial.
On the other hand, if s_3 ≥ 0 then µ↓_3 ≥ 0, which implies µ↓_1, µ↓_2 ≥ 0 as well, so s_1 ≥ s_3 ≥ 0. Thus

q_2 = (s_1 + 2s_3)² − 8s_3² = s_1² + 4s_3(s_1 − s_3) ≥ s_1².

It follows that 2√q_1 + √q_2 ≥ √q_2 ≥ s_1 ≥ s_1 + 2(s_3 − s_1) = 2s_3 − s_1, as desired.

To complete the proof, we need to show that the system of inequalities (12) holding implies that the system (13) holds, so that we can completely discard the system (12). Well, if −2s_− ≤ s_1 then q_1 ≥ 0. The second inequality in (12) says exactly that √q_1 ≥ s_1 − 2s_3, and since s_2 ≥ s_3 this implies both √q_1 ≥ s_1 − 2s_2 and 2√q_1 + √q_2 ≥ s_1 − 2s_3. All that remains is to show that q_2 ≥ 0 and √q_2 ≥ s_1 − 4s_2 + 2s_3. Well, to see that q_2 ≥ 0, we note that s_1 − √(s_1² − 4s_−²) ≤ 2s_3 implies s_3 ≥ 0, in which case we have q_2 = s_1² + 4s_3(s_1 − s_3) ≥ s_1² ≥ 0. Thus we also conclude that √q_2 ≥ s_1 ≥ s_1 − 2s_2 ≥ s_1 − 2s_2 − (2s_2 − 2s_3) = s_1 − 4s_2 + 2s_3, which completes the proof that the system (12) implies the system (13). This finally gets us to the form described in part (d) of the theorem, and completes the proof.

Notice in particular that conditions (c) and (d) provide necessary conditions that the spectra of entanglement witnesses in BP_{2,n} must satisfy. However, because σ(BP_{2,n}) is not convex (see Example 1), these conditions are not sufficient. A comparison of these necessary conditions with the exact result of Theorem 3 in the two-qubit case is provided by Figure 3.

Figure 3: The region of tuples (µ_1, µ_2, µ_3) ∈ R³ with the property that (µ_1, µ_2, µ_3, µ_4) is the spectrum of an entanglement witness, where µ_4 = 1 − µ_1 − µ_2 − µ_3, is displayed in green (as in Figure 2). The orange region depicts the extra points that are not spectra of entanglement witnesses, but cannot be ruled out by the necessary conditions of Theorem 4. The orange region looks like a convex "shield" in front of the non-convex green region.

Example 3.
Consider the vector of eigenvalues µ = ((1 + √3)/2, (1 + √3)/2, 1, 1, c, c) as the spectrum of an entanglement witness in M_2(C) ⊗ M_3(C), where 0 > c ∈ R. We can determine a bound on the most negative eigenvalue c by plugging µ into part (d) of Theorem 4. In this case, the tightest restrictions on c come from the inequality q_1 ≥ 0, so this is the only one we explicitly work through. First, we compute q_1 in terms of c (here s_1 = 3 + √3 + 2c and s_− = 2c):

q_1 = s_1² − 4s_−² = (3 + √3 + 2c)² − 16c² = −12c² + (12 + 4√3)c + 12 + 6√3 ≥ 0.

The above quadratic inequality limits c to being between its two roots, which are

((12 + 4√3) ± √((12 + 4√3)² − 4(−12)(12 + 6√3)))/24 = ((12 + 4√3) ± (24 + 8√3))/24,

i.e., c = (3 + √3)/2 and c = −(3 + √3)/6. It follows that c cannot be smaller than the lesser of these two roots—i.e., µ ∈ Conv(σ(BP_{2,3})) if and only if c ≥ −(3 + √3)/6.

Alternatively, we can use part (c) of Theorem 4 to see that µ ∈ Conv(σ(BP_{2,3})) when c ≥ −(3 + √3)/6, since in this case we can choose the positive semidefinite matrix X ∈ M_2(R) to be

X = −c [[1, −1], [−1, 1]],

and it is straightforward to verify that this matrix satisfies the conditions of part (c) of the theorem.

It is worth noting that we explicitly constructed an entanglement witness with spectrum µ when c = (1 − √3)/2 in Example 2. However, we have now shown that the minimal value of c is −(3 + √3)/6 ≈ −0.7887 < −0.3660 ≈ (1 − √3)/2. We have not been able to explicitly construct an entanglement witness with spectrum µ when c = −(3 + √3)/6, and in fact one might not even exist, since σ(BP_{2,3}) ≠ Conv(σ(BP_{2,3})). However, we can explicitly demonstrate that µ ∈ Conv(σ(BP_{2,3})) in this case, since Lemma 1 tells us that (1, 1, 1, 0, 0, −1) ∈ σ(BP_{2,3}), so

(1, 1, 1, 0, 0, −1) + (1, 1, 0, 1, −1, 0) = (2, 2, 1, 1, −1, −1) ∈ Conv(σ(BP_{2,3})),

which implies that if c = −(3 + √3)/6 then

(√3/3)(2, 2, 1, 1, −1, −1) + ((3 − √3)/6)(1, 1, 2, 2, −1, −1) = ((1 + √3)/2, (1 + √3)/2, 1, 1, c, c) ∈ Conv(σ(BP_{2,3})).

We now consider the case of entanglement witnesses in arbitrary dimensions. This case is much more difficult for two reasons. First, we do not know whether or not ASEP_{m,n} = APPT_{m,n} when m, n ≥ 3 [32], and thus we similarly do not know that Conv(σ(BP_{m,n})) = Conv(σ(DBP_{m,n})). Second, the characterization of APPT_{m,n} is quite complicated and requires further explanation when min{m, n} ≥ 3. For simplicity, we assume for the remainder of this section that m ≤ n.

We now provide more details about the characterization of absolutely PPT states, but for a full and rigorous description we refer the interested reader to [24]. We start by constructing several linear maps L_j : M_m(R) → R^{mn}. To illustrate how these linear maps {L_j} are constructed, recall from Lemma 1 that if a pure state |v⟩ ∈ C^m ⊗ C^n has Schmidt coefficients α_1 ≥ α_2 ≥ ⋯ ≥ α_m ≥ 0, then the eigenvalues of (|v⟩⟨v|)^Γ are the numbers α_j² (1 ≤ j ≤ m), ±α_iα_j (1 ≤ i < j ≤ m), and possibly some extra zeroes.

Well, let's consider the possible orderings of those numbers. For example, if m = 2 then the only possible ordering is

α_1² ≥ α_1α_2 ≥ α_2² ≥ 0 ≥ ⋯ ≥ 0 ≥ −α_1α_2,

whereas if m = 3 then there are two possible orderings:

α_1² ≥ α_1α_2 ≥ α_1α_3 ≥ α_2² ≥ α_2α_3 ≥ α_3² ≥ 0 ≥ ⋯ ≥ 0 ≥ −α_2α_3 ≥ −α_1α_3 ≥ −α_1α_2 or
α_1² ≥ α_1α_2 ≥ α_2² ≥ α_1α_3 ≥ α_2α_3 ≥ α_3² ≥ 0 ≥ ⋯ ≥ 0 ≥ −α_2α_3 ≥ −α_1α_3 ≥ −α_1α_2.

Well, for each of these orderings, we associate a linear map L_j : M_m(R) → R^{mn} by placing ±y_{i,j} into the position of L_j(Y) where ±α_iα_j appears in the associated ordering (and actually L_j is just a linear map on symmetric matrices, not all matrices, so that we do not have to worry about distinguishing between y_{i,j} and y_{j,i}).
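These orderings are also easy to explore numerically. The following sketch (ours, not from the paper; the helper name `ordering_count` is hypothetical) samples random tuples α_1 > α_2 > ⋯ > α_m > 0 and counts how many distinct orderings of the products α_iα_j actually occur:

```python
import numpy as np

def ordering_count(m, samples=20_000, seed=1):
    """Count the distinct orderings of the products a_i * a_j (i <= j)
    seen over random tuples a_1 > a_2 > ... > a_m > 0 (sampling sketch)."""
    rng = np.random.default_rng(seed)
    a = -np.sort(-rng.random((samples, m)), axis=1)   # each row sorted decreasingly
    pairs = [(i, j) for i in range(m) for j in range(i, m)]
    prods = np.stack([a[:, i] * a[:, j] for (i, j) in pairs], axis=1)
    orders = np.argsort(-prods, axis=1)               # permutation that sorts each row
    return len({tuple(row) for row in orders})

print(ordering_count(2), ordering_count(3))  # the 1 and 2 orderings described above
```

For m = 4 the same sampling (with more samples) recovers the 10 orderings counted in [35]; note, though, that sampling can only exhibit orderings, never rule them out, so it gives a lower bound on the true count in general.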
For example, in the m = 2 case, there is just one linear map L_1 : M_2(R) → R^{2n}, and it is

L_1([[y_{1,1}, y_{1,2}], [y_{1,2}, y_{2,2}]]) = (y_{1,1}, y_{1,2}, y_{2,2}, 0, …, 0, −y_{1,2}).

In the m = 3 case there are two linear maps L_1, L_2 : M_3(R) → R^{3n}, and they are

L_1(Y) = (y_{1,1}, y_{1,2}, y_{1,3}, y_{2,2}, y_{2,3}, y_{3,3}, 0, …, 0, −y_{2,3}, −y_{1,3}, −y_{1,2}) and
L_2(Y) = (y_{1,1}, y_{1,2}, y_{2,2}, y_{1,3}, y_{2,3}, y_{3,3}, 0, …, 0, −y_{2,3}, −y_{1,3}, −y_{1,2}).    (14)

The number of distinct possible orderings (and thus the number of linear maps to be considered) grows exponentially in m. For m = 2, 3, 4, …, this quantity equals 1, 2, 10, 114, 2608, 107498, …, though no formula is known for computing it in general [35].

The main result of [24] says that λ ∈ APPT_{m,n} if and only if L_j^*(λ↑) ⪰ O for all j, where λ↑ is the vector with the same entries as λ, sorted in non-decreasing order (rather than the usual non-increasing order we have used previously in this paper). For example, when m = n = 3, the result says that λ ∈ APPT_{3,3} if and only if L_1^*(λ↑) ⪰ O and L_2^*(λ↑) ⪰ O, where

L_1^*(λ↑) = [[2λ↓_9, λ↓_8 − λ↓_1, λ↓_7 − λ↓_2], [λ↓_8 − λ↓_1, 2λ↓_6, λ↓_5 − λ↓_3], [λ↓_7 − λ↓_2, λ↓_5 − λ↓_3, 2λ↓_4]] and
L_2^*(λ↑) = [[2λ↓_9, λ↓_8 − λ↓_1, λ↓_6 − λ↓_2], [λ↓_8 − λ↓_1, 2λ↓_7, λ↓_5 − λ↓_3], [λ↓_6 − λ↓_2, λ↓_5 − λ↓_3, 2λ↓_4]].

The last tool that we need to be able to state our characterization of Conv(σ(DBP_{m,n})) is a function p : R^n → R^n that computes the partial sums of a vector:

p(v) := (Σ_{j=1}^n v_j, Σ_{j=2}^n v_j, Σ_{j=3}^n v_j, …, Σ_{j=n−1}^n v_j, v_n).

Theorem 5.
Suppose µ ∈ R^{mn} and m ≤ n. Then the following are equivalent:

a) µ ∈ Conv(σ(DBP_{m,n})).
b) There exist positive semidefinite matrices Y_1, Y_2, … ∈ M_m(R) such that Σ_j p(L_j(Y_j)) ≤ p(µ↓).

Before proving this theorem, we note that in the m = 2 case it reduces exactly to the equivalence of conditions (b) and (c) in Theorem 4. Also, in the previous sections we defined a vector s containing the partial sums of µ↓; this vector s is exactly p(µ↓).

Proof. Again, we start by using Corollary 1 to note that µ ∈ Conv(σ(DBP_{m,n})) if and only if µ ∈ APPT°_{m,n}, if and only if the optimal value of the following semidefinite program is non-negative:

minimize: Σ_{j=1}^{mn} µ↓_j λ_{mn+1−j}
subject to: L_j^*(λ↑) ⪰ O for all j
Σ_{j=1}^{mn} λ_j = 1
λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_{mn} ≥ 0.    (15)

The dual of this semidefinite program is as follows:

maximize: d
subject to: Σ_j L_j(Y_j) − (z, 0) + (0, z) + d·1 ≤ µ↓
d ∈ R, 0 ≤ z ∈ R^{mn−1}
O ⪯ Y_j ∈ M_m(R) for all j,    (16)

where 1 ∈ R^{mn} is the vector with each entry equal to 1. Well, strong duality holds for this semidefinite program for reasons almost identical to those provided in the proof of Theorem 4, so we just want to determine whether or not this dual problem has a feasible point with d ≥ 0. In other words, we want to know whether or not there exist 0 ≤ z ∈ R^{mn−1} and positive semidefinite matrices {Y_j} ⊆ M_m(R) such that

Σ_j L_j(Y_j) − (z, 0) + (0, z) ≤ µ↓.    (17)

Well, applying the same quantifier elimination techniques from the proof of Theorem 4 to the vector z shows that inequality (17) is equivalent to the existence of positive semidefinite matrices {Y_j} ⊆ M_m(R) such that

Σ_j p(L_j(Y_j)) ≤ p(µ↓),    (18)

which completes the proof.

For example, in the m = n = 3 case, we recall the two maps L_1, L_2 from Equation (14).
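Working through those details by hand is mechanical but tedious; the following sketch (ours, not part of the paper) hard-codes L_1 and L_2 from Equation (14) for n = 3, together with the partial-sum map p, and numerically checks that the matrix L_1^*(λ↑) displayed before Theorem 5 really is the adjoint of L_1 with respect to the pairing used there (the factor of 2 comes from the doubled diagonal):

```python
import numpy as np

def L1(Y):
    # first m = 3 ordering from Eq. (14); for n = 3 there are no interior zeroes
    return np.array([Y[0,0], Y[0,1], Y[0,2], Y[1,1], Y[1,2], Y[2,2],
                     -Y[1,2], -Y[0,2], -Y[0,1]])

def L2(Y):
    # second m = 3 ordering from Eq. (14)
    return np.array([Y[0,0], Y[0,1], Y[1,1], Y[0,2], Y[1,2], Y[2,2],
                     -Y[1,2], -Y[0,2], -Y[0,1]])

def p(v):
    # partial-sum map: p(v)_k = v_k + v_{k+1} + ... + v_n
    return np.cumsum(v[::-1])[::-1]

def L1_star(l):
    # the first matrix displayed before Theorem 5, with l[k-1] playing the role of λ↓_k
    return np.array([[2*l[8],      l[7] - l[0], l[6] - l[1]],
                     [l[7] - l[0], 2*l[5],      l[4] - l[2]],
                     [l[6] - l[1], l[4] - l[2], 2*l[3]]])

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
Y = A + A.T                              # random symmetric Y
lam = np.sort(rng.random(9))[::-1]       # random λ↓ (non-increasing)
lam_up = lam[::-1]                       # λ↑ (non-decreasing)

# adjoint identity: tr(Y · L1*(λ↑)) = 2 ⟨L1(Y), λ↑⟩
assert np.isclose(np.trace(Y @ L1_star(lam)), 2 * (L1(Y) @ lam_up))
```

With these helpers, testing a candidate collection {Y_j} against Theorem 5(b) reduces to the single entrywise vector inequality Σ_j p(L_j(Y_j)) ≤ p(µ↓).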
After working through the details, we see that Theorem 5 says that µ ∈ Conv(σ(DBP_{3,3})) if and only if there exist 3 × 3 positive semidefinite matrices X, Y ∈ M_3(R) such that

(x_{1,1} + x_{2,2} + x_{3,3}) + (y_{1,1} + y_{2,2} + y_{3,3}) ≤ µ↓_1 + µ↓_2 + ⋯ + µ↓_9
(x_{2,2} + x_{3,3}) + (y_{2,2} + y_{3,3}) ≤ µ↓_2 + µ↓_3 + ⋯ + µ↓_9
(x_{2,2} + x_{3,3} − x_{1,2}) + (y_{2,2} + y_{3,3} − y_{1,2}) ≤ µ↓_3 + µ↓_4 + ⋯ + µ↓_9
(x_{3,3} − x_{1,2}) + (y_{2,2} + y_{3,3} − y_{1,2} − y_{1,3}) ≤ µ↓_4 + µ↓_5 + ⋯ + µ↓_9
(x_{3,3} − x_{1,2} − x_{1,3}) + (y_{3,3} − y_{1,2} − y_{1,3}) ≤ µ↓_5 + µ↓_6 + ⋯ + µ↓_9
(x_{3,3} − x_{1,2} − x_{1,3} − x_{2,3}) + (y_{3,3} − y_{1,2} − y_{1,3} − y_{2,3}) ≤ µ↓_6 + µ↓_7 + µ↓_8 + µ↓_9
(−x_{1,2} − x_{1,3} − x_{2,3}) + (−y_{1,2} − y_{1,3} − y_{2,3}) ≤ µ↓_7 + µ↓_8 + µ↓_9
(−x_{1,2} − x_{1,3}) + (−y_{1,2} − y_{1,3}) ≤ µ↓_8 + µ↓_9
−x_{1,2} − y_{1,2} ≤ µ↓_9.    (19)

Note in particular that the inequality involving µ↓_4 + ⋯ + µ↓_9 is not symmetric in X and Y, and this inequality is the reason that we require two positive semidefinite matrices X and Y rather than just one. In general, the number of inequalities to be checked is mn, but the number of positive semidefinite matrices involved (and the number of terms being added up in each inequality) grows exponentially in min{m, n} (as we discussed earlier).

While this system of inequalities is not the type of thing that we expect to be able to solve analytically like we did in the qubit–qudit case (condition (d) of Theorem 4), the existence of X and Y can be determined numerically via semidefinite programming (e.g., in the CVX package for MATLAB [36]). Thus these inequalities can be used computationally as necessary conditions that the eigenvalues of entanglement witnesses must satisfy.

Example 4.
Suppose c ∈ R and consider the vector of eigenvalues µ = (1, 1, 1, 1, 1, 1, −1, −1, c). To determine which values of c result in µ ∈ Conv(σ(DBP_{3,3})), we note that it is straightforward to verify that λ := (2, 2, 2, 1, 1, 1, 1, 1, 1)/12 ∈ APPT_{3,3} and that, when c ≤ −1,

Σ_{j=1}^9 µ↓_j λ↓_{10−j} = (2c − 2 − 2 + 1 + 1 + 1 + 1 + 1 + 1)/12 = (c + 1)/6,

which is negative when c < −1. We conclude from Corollary 1 that µ ∉ APPT°_{3,3} = Conv(σ(DBP_{3,3})) whenever c < −1 (and in particular, there does not exist a decomposable entanglement witness with eigenvalues (1, 1, 1, 1, 1, 1, −1, −1, c) when c < −1).

On the other hand, we can see that µ ∈ Conv(σ(DBP_{3,3})) whenever c ≥ −1 via the system of inequalities (19). In particular,

X = Y = (1/2)[[1, 1, 1], [1, 1, 1], [1, 1, 1]]

is positive semidefinite and satisfies the system of inequalities (19) when c ≥ −1. By putting these two facts together, we see that µ ∈ Conv(σ(DBP_{3,3})) if and only if c ≥ −1.

It is worth noting that we can also see directly that µ ∈ σ(DBP_{3,3}) whenever c ≥ −1, since if |v⟩ = |0⟩ ⊗ |0⟩ + |1⟩ ⊗ |1⟩ + |2⟩ ⊗ |2⟩ then (|v⟩⟨v|)^Γ has eigenvalues (1, 1, 1, 1, 1, 1, −1, −1, −1), and we can increase the smallest eigenvalue by adding a suitable positive semidefinite matrix to (|v⟩⟨v|)^Γ.

Theorem 5 only applies to decomposable entanglement witnesses, since we do not know whether or not ASEP_{m,n} = APPT_{m,n} when m, n ≥ 3. This result gives us a new avenue for approaching the absolute separability problem, since if we can find an entanglement witness with eigenvalues µ such that µ ∉ Conv(σ(DBP_{m,n})), then it follows that ASEP_{m,n} ⊊ APPT_{m,n}. For example, in light of Example 4 we know that if there exists an entanglement witness with eigenvalues (1, 1, 1, 1, 1, 1, −1, −1, c) for some c < −1, then ASEP_{3,3} ≠ APPT_{3,3}.

We have introduced the inverse eigenvalue problem for the set of entanglement witnesses (or, equivalently, the set of block-positive matrices). We completely solved this problem in the smallest-dimensional case of M_2(C) ⊗ M_2(C) in Theorem 3, and we provided a strong set of necessary conditions for the M_2(C) ⊗ M_n(C) case in Theorem 4(d). We then provided a general method for constructing necessary conditions on the spectra of decomposable entanglement witnesses in arbitrary dimensions in Theorem 5, and we illustrated a duality relationship with absolute separability that provides a new line of attack for approaching the question of whether or not ASEP_{m,n} = APPT_{m,n}.

The most pressing open question resulting from this work is whether or not there exists an entanglement witness with spectrum µ ∉ Conv(σ(DBP_{m,n})); finding such an entanglement witness would show that ASEP_{m,n} ⊊ APPT_{m,n}. Another question that might be reasonably solvable is that of finding exact conditions for membership in σ(BP_{2,3}). We know that every W ∈ BP_{2,3} can be written in the form W = X^Γ + Y, but the fact that W can have two negative eigenvalues (and thus X cannot be chosen to have rank 1) makes it difficult to pin down exactly how negative its eigenvalues can be. For example, what are the restrictions on c in Example 3? We showed in that example that µ ∈ Conv(σ(BP_{2,3})) if and only if c ≥ −(3 + √3)/6 ≈ −0.7887. However, we have not been able to find an entanglement witness with spectrum µ for any c < (1 − √3)/2 ≈ −0.3660.

Acknowledgements.
N.J. was supported by NSERC Discovery Grant number RGPIN-2016-04003.
References

[1] P. D. Egleston, T. D. Lenker, and S. K. Narayan, "The nonnegative inverse eigenvalue problem," Linear Algebra and its Applications, vol. 379, pp. 475–490, 2004.
[2] G. W. Soules, "Constructing symmetric nonnegative matrices," Linear and Multilinear Algebra, vol. 13, pp. 241–251, 1983.
[3] N. Dmitriev and E. Dynkin, "On characteristic roots of stochastic matrices (Russian, with English summary)," Bull. Acad. Sci. URSS. Sér. Math. [Izvestia Akad. Nauk SSSR], vol. 10, pp. 167–184, 1946.
[4] C. Johnson, "Row stochastic matrices similar to doubly stochastic matrices," Linear and Multilinear Algebra, vol. 10, pp. 113–130, 1981.
[5] H. Perfect and L. Mirsky, "Spectral properties of doubly-stochastic matrices," Monatshefte für Mathematik, vol. 69, pp. 35–57, 1965.
[6] S.-G. Hwang and S.-S. Pyo, "The inverse eigenvalue problem for symmetric doubly stochastic matrices," Linear Algebra and its Applications, vol. 379, pp. 77–83, 2004.
[7] H. J. Landau, "The inverse eigenvalue problem for real symmetric Toeplitz matrices," Journal of the American Mathematical Society, vol. 7, pp. 749–767, 1994.
[8] H. Pickmann, R. L. Soto, J. Egaña, and M. Salas, "An inverse eigenvalue problem for symmetrical tridiagonal matrices," Computers & Mathematics with Applications, vol. 54, pp. 699–708, 2007.
[9] M. T. Chu, "Inverse eigenvalue problems," SIAM Review, vol. 40, pp. 1–39, 1998.
[10] L. Gurvits, "Classical deterministic complexity of Edmonds' problem and quantum entanglement," in Proceedings of the Thirty-Fifth Annual ACM Symposium on Theory of Computing, pp. 10–19, 2003.
[11] S. Gharibian, "Strong NP-hardness of the quantum separability problem," Quantum Information and Computation, vol. 10, pp. 343–360, 2010.
[12] O. Gühne and G. Tóth, "Entanglement detection," Physics Reports, vol. 474, pp. 1–75, 2009.
[13] R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki, "Quantum entanglement," Reviews of Modern Physics, vol. 81, pp. 865–942, 2009.
[14] A. Peres, "Separability criterion for density matrices," Physical Review Letters, vol. 77, pp. 1413–1415, 1996.
[15] E. Størmer, "Positive linear maps of operator algebras," Acta Mathematica, vol. 110, pp. 233–278, 1963.
[16] S. L. Woronowicz, "Positive maps of low dimensional matrix algebras," Reports on Mathematical Physics, vol. 10, pp. 165–183, 1976.
[17] K. R. Parthasarathy, "On the maximal dimension of a completely entangled subspace for finite level quantum systems," Proc. Indian Acad. Sci. (Math. Sci.), vol. 114, pp. 365–374, 2004.
[18] G. Sarbicki, "Spectral properties of entanglement witnesses," Journal of Physics A: Mathematical and Theoretical, vol. 41, p. 375303, 2008.
[19] N. Johnston, "Non-positive partial transpose subspaces can be as large as any entangled subspace," Physical Review A, vol. 87, p. 064302, 2013.
[20] N. Johnston and D. W. Kribs, "A family of norms with applications in quantum information theory," J. Math. Phys., vol. 51, p. 082202, 2010.
[21] N. Johnston, Norms and Cones in the Theory of Quantum Entanglement. PhD thesis, University of Guelph, 2012.
[22] S. J. Szarek, E. Werner, and K. Życzkowski, "Geometry of sets of quantum maps: a generic positive map acting on a high-dimensional system is not completely positive," Journal of Mathematical Physics, vol. 49, p. 032113, 2008.
[23] S. Rana, "Negative eigenvalues of partial transposition of arbitrary bipartite states," Phys. Rev. A, vol. 87, p. 054301, 2013.
[24] R. Hildebrand, "Positive partial transpose from spectra," Phys. Rev. A, vol. 76, p. 052325, 2007.
[25] M. A. Nielsen and I. L. Chuang, Quantum computation and quantum information. Cambridge University Press, 2000.
[26] N. Ganguly, J. Chatterjee, and A. S. Majumdar, "Witness of mixed separable states useful for entanglement creation," Physical Review A, vol. 89, p. 052304, 2014.
[27] S. Boyd and L. Vandenberghe, Convex optimization. Cambridge University Press, 2004.
[28] J. Watrous, "Theory of quantum information lecture notes." Published electronically at , 2011.
[29] M. Kuś and K. Życzkowski, "Geometry of entangled states," Phys. Rev. A, vol. 63, p. 032307, 2001.
[30] N. Johnston, "Separability from spectrum for qubit–qudit states," Phys. Rev. A, vol. 88, p. 062330, 2013.
[31] F. Verstraete, K. Audenaert, and B. D. Moor, "Maximally entangled mixed states of two qubits," Phys. Rev. A, vol. 64, p. 012316, 2001.
[32] S. Arunachalam, N. Johnston, and V. Russo, "Is absolute separability determined by the partial transpose?," Quantum Information and Computation, vol. 15, pp. 0694–0720, 2015.
[33] R. Bhatia, Matrix analysis. Springer, 1997.
[34] J. M. Leinaas, J. Myrheim, and E. Ovrum, "Geometrical aspects of entanglement," Physical Review A, vol. 74, p. 012313, 2006.
[35] N. Johnston, "The On-Line Encyclopedia of Integer Sequences." A237749, February 2014. The number of possible orderings of the real numbers x_i x_j (i ≤ j), subject to the constraint that x_1 > x_2 > ⋯ > x_n > 0.